With the rapid development of science and technology, artificial intelligence has become a defining technology of the new era. As one of the most important research areas of artificial intelligence, autonomous driving has set off a global research boom. Autonomous driving technology is mainly divided into four areas: perception, decision-making, planning, and control. Among them, perception is called the "wisdom eye" of autonomous driving: it is the key link between the unmanned vehicle and the external environment, and it is the basis of the other three technologies. The perception module interacts with the external environment through sensors such as cameras, lidars, millimeter-wave radars, and ultrasonic radars. Any single sensor has unavoidable defects. For example, a camera offers relatively high resolution and rich color information, but it is very sensitive to lighting; at night or when entering a tunnel, its value is greatly diminished. Lidar, in contrast, uses infrared light that is insensitive to ambient lighting, and it supplies the depth information that the camera lacks. However, the resolution of a single lidar is very low, which reduces the ability of an unmanned vehicle to perceive and recognize obstacles, so measures must be taken to improve lidar resolution and thereby strengthen the vehicle's perception. To this end, this thesis studies the fusion of two kinds of sensors, multiple lidars and a camera, and applies the fusion system to scene recognition.

Traditional point cloud registration algorithms for lidars, such as ICP and NDT, have known flaws. The flaw of the ICP algorithm is that improper initialization can cause the iterative process to converge in the wrong direction, trapping the algorithm in a local optimum. The main flaws of the NDT algorithm are its narrow convergence basin, the discontinuity of its cost function, and unreliable pose estimation on sparse point clouds in outdoor environments. In existing lidar-camera fusion algorithms, lighting problems introduce errors into corner extraction from both the image and the point cloud. For pure image-based scene recognition with the DBoW2 algorithm, the main drawback is that scene recognition fails entirely once the image recognition is wrong.

In view of the above problems, the main contributions of this thesis are as follows:
(1) For the registration of multiple lidars, the thesis presents a new method based on the artificial fish swarm optimization algorithm. Applied to point cloud registration, it solves the pose transformation between multiple lidars; the method is insensitive to initialization, reduces registration time, and achieves a smaller registration error.
(2) In realizing the camera-lidar fusion algorithm, the thesis presents a method based on plane and line fitting to calculate corner coordinates in the 3D point cloud. The pixel coordinates of the corner points in the image are then obtained accurately using a self-made calibration board and manual point selection. As a result, the estimated transformation between camera and lidar is more accurate, and the two kinds of sensors are fused successfully.
(3) Finally, the thesis applies the multi-sensor fusion to scene recognition and improves the original DBoW2 algorithm: the scene recognition result is verified and corrected using point cloud data synchronized with the image. The improved method achieves a higher recognition success rate than the original algorithm, which shows that the fusion of multiple lidars and a camera has definite research significance and practical value.
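The ICP behavior criticized above (dependence on correspondences chosen by nearest neighbor, hence sensitivity to initialization) can be illustrated with a minimal point-to-point ICP sketch. This is a toy 2D implementation written for this abstract, not the thesis's code; all function names and the brute-force nearest-neighbor search are our own simplifications.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation/translation (Kabsch/SVD) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Point-to-point ICP. Correspondences are nearest neighbours, so the
    result depends on the initial pose: a bad start can lock in wrong
    matches and leave the algorithm in a local optimum."""
    cur = src.copy()
    R_total, t_total = np.eye(src.shape[1]), np.zeros(src.shape[1])
    for _ in range(iters):
        # brute-force nearest-neighbour correspondence (fine for a toy)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, cur
```

Each iteration provably lowers (or keeps) the correspondence cost, but only toward the nearest local optimum; swarm-style global search, as in contribution (1), avoids committing to one initialization.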
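Contribution (2) extracts corner coordinates from the 3D point cloud by fitting planar structure. The abstract does not give the exact formulation, so the following is only a sketch of the plane-fitting idea under our own assumptions: each calibration-board face is fitted with a least-squares plane, and a corner is recovered as the intersection of three such planes.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) cloud.
    Returns a unit normal n and offset d such that n . x = d."""
    centroid = points.mean(axis=0)
    # the right-singular vector with the smallest singular value
    # of the centered points is the direction of least variance,
    # i.e. the plane normal
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]
    return n, n @ centroid

def corner_from_planes(planes):
    """Intersect three planes n_i . x = d_i by solving the 3x3 system."""
    N = np.stack([n for n, _ in planes])
    d = np.array([d for _, d in planes])
    return np.linalg.solve(N, d)
```

Fitting over many points averages out per-point lidar noise, which is why a plane-based corner is more stable than picking a single raw point as the corner.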
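Contribution (3) verifies an image-based DBoW2 scene match against the synchronized point cloud. The abstract does not say how the verification is computed, so the snippet below is only one plausible stand-in, not the thesis's method: it compares a simple rotation-invariant range histogram of the two clouds and rejects the candidate when the descriptors disagree. The descriptor choice, bin count, and threshold are all assumptions.

```python
import numpy as np

def range_histogram(cloud, bins=32, max_range=50.0):
    """Rotation-invariant point cloud descriptor:
    a normalized histogram of point ranges (distances from the sensor)."""
    r = np.linalg.norm(cloud, axis=1)
    h, _ = np.histogram(r, bins=bins, range=(0.0, max_range))
    return h / max(h.sum(), 1)

def verify_match(query_cloud, candidate_cloud, threshold=0.3):
    """Accept an image-based loop candidate only if the synchronized
    point clouds agree: small L1 distance between their descriptors."""
    d = np.abs(range_histogram(query_cloud) - range_histogram(candidate_cloud)).sum()
    return d < threshold
```

A cross-check of this kind is what lets the fused system reject a wrong image match that pure DBoW2 would accept.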