Simultaneous Localization and Mapping (SLAM) refers to the technology by which a carrier uses sensors such as cameras and LiDAR to estimate its own position and build a map of an unknown environment. It is the technical foundation of autonomous robot navigation and is widely used in autonomous driving, unmanned inspection, virtual reality, and other fields. In practice, however, adverse weather such as rain and fog interferes with the sensors: water mist lowers image contrast and makes visual feature points hard to extract, while the detection range of the LiDAR shrinks and raindrops introduce noise into the point cloud. Feature matching then becomes unreliable, which seriously degrades the localization accuracy of SLAM and the quality of the constructed map.

To address the difficulty of high-precision localization and mapping in rain and fog, this thesis proposes a tightly coupled LiDAR-visual-inertial odometry based on multi-sensor fusion and verifies its effectiveness through real-vehicle experiments. The main research contents are as follows.

To counter the loss of image contrast caused by water mist, which makes visual feature extraction difficult, an image dehazing algorithm based on the dark channel prior is proposed. Guided filtering is introduced to estimate a pixel-level transmission map, and the atmospheric light value is determined according to the dark channel prior. The transmission map and atmospheric light value are then used to restore image contrast and achieve the dehazing effect.

On the basis of image dehazing, a hierarchical point cloud filtering algorithm is used to remove raindrop noise from the LiDAR point cloud. The statistical characteristics of raindrop noise are analyzed first, radius filtering is applied according to the point cloud density within the noise distribution range, and mean filtering is then applied to the global point cloud to eliminate the residual noise and complete the removal of raindrop returns.
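As an illustration of the dehazing step, the following is a minimal Python sketch of dark-channel-prior dehazing with a guided-filter-refined transmission map. It is not the thesis implementation; the patch size, filter radius, the constants eps, omega, and t0, and all function names are illustrative assumptions.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, followed by a local minimum filter."""
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def atmospheric_light(img, dark):
    """Average the brightest 0.1% of pixels in the dark channel (assumed heuristic)."""
    n = max(1, int(dark.size * 0.001))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def guided_filter(guide, src, radius=60, eps=1e-3):
    """Box-filter guided filter used to refine the coarse transmission map."""
    mean_I = cv2.blur(guide, (radius, radius))
    mean_p = cv2.blur(src, (radius, radius))
    mean_Ip = cv2.blur(guide * src, (radius, radius))
    var_I = cv2.blur(guide * guide, (radius, radius)) - mean_I * mean_I
    cov_Ip = mean_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return cv2.blur(a, (radius, radius)) * guide + cv2.blur(b, (radius, radius))

def dehaze(bgr, omega=0.95, t0=0.1):
    img = bgr.astype(np.float32) / 255.0
    A = atmospheric_light(img, dark_channel(img))
    # Coarse transmission from the dark channel of the normalized image.
    t = 1.0 - omega * dark_channel(img / A)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    t = np.clip(guided_filter(gray, t.astype(np.float32)), t0, 1.0)
    # Recover the scene radiance J = (I - A) / t + A.
    J = (img - A) / t[..., None] + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```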
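The two-stage raindrop filter can be sketched with Open3D's built-in outlier removal. The radius, neighbor-count, and std_ratio thresholds below are placeholders, and the statistical outlier removal is used here as a stand-in for the mean filtering described above.

```python
import numpy as np
import open3d as o3d

def remove_raindrop_noise(points_xyz,
                          radius=0.5, min_neighbors=4,
                          nb_neighbors=20, std_ratio=1.5):
    """Two-stage filter: radius filtering for sparse raindrop returns,
    then a statistical (mean-distance) filter for residual noise."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # Stage 1: drop points with too few neighbors inside the search radius,
    # which targets the sparse, isolated returns produced by raindrops.
    pcd, _ = pcd.remove_radius_outlier(nb_points=min_neighbors, radius=radius)

    # Stage 2: drop points whose mean neighbor distance deviates too far
    # from the global average, removing the residual scattered noise.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=nb_neighbors,
                                            std_ratio=std_ratio)
    return np.asarray(pcd.points)

# Example: filter one scan stored as an (N, 3) array of XYZ coordinates.
scan = np.random.rand(10000, 3) * 50.0   # stand-in for a real LiDAR scan
clean = remove_raindrop_noise(scan)
print(clean.shape)
```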
To deal with the camera's sensitivity to illumination changes and aggressive motion, and with the tendency of LiDAR to degenerate in structurally similar scenes, a tightly coupled LiDAR-visual-inertial odometry is proposed. Spherical projection is used to fuse the visual features with the laser point cloud: a KD-tree search finds the point cloud plane closest to each visual feature point and completes the image-to-point-cloud registration, so that the image feature points obtain high-precision depth from the LiDAR. IMU pre-integration provides the initial value of the pose change, and optical flow is applied to the registered feature points for motion estimation, realizing the tight coupling between the camera and the LiDAR.

To counter the decline in SLAM localization accuracy caused by the accumulation of front-end error, a global pose optimization algorithm based on a factor graph is proposed. Pose constraints from IMU pre-integration, the visual odometry, and the LiDAR odometry are combined to construct the factor graph, which is optimized globally with iSAM2. A loop-closure detection factor combining a visual bag-of-words with point cloud matching is then designed; loop closure effectively reduces the accumulated error and improves the accuracy of SLAM.

Simulation and real-vehicle experiments show that, compared with traditional SLAM methods such as A-LOAM and LeGO-LOAM, the proposed method achieves higher localization accuracy and robustness in rain and fog environments.
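The depth-association step can be illustrated as follows: visual feature bearings and LiDAR points are projected onto the unit sphere, and a KD-tree over the projected LiDAR points returns the nearest neighbors used to estimate a depth for each feature. The projection model, the neighbor count k, and the function names are assumptions; the thesis fits the nearest point cloud plane rather than averaging ranges.

```python
import numpy as np
from scipy.spatial import cKDTree

def associate_depth(feature_dirs, lidar_points, k=3):
    """Assign LiDAR depth to visual features via nearest neighbors on the unit sphere.

    feature_dirs : (M, 3) unit bearing vectors of visual features in the camera frame
    lidar_points : (N, 3) LiDAR points already transformed into the camera frame
    Returns an (M,) array of depths (mean range of the k nearest LiDAR directions).
    """
    ranges = np.linalg.norm(lidar_points, axis=1)
    lidar_dirs = lidar_points / ranges[:, None]     # spherical projection

    tree = cKDTree(lidar_dirs)                      # KD-tree over projected points
    _, idx = tree.query(feature_dirs, k=k)          # k nearest LiDAR directions

    # Simple depth estimate: average range of the neighbors; the thesis instead
    # fits the local plane spanned by the neighbors and intersects the feature ray.
    return ranges[idx].mean(axis=1)

# Example with synthetic data.
feats = np.random.randn(100, 3)
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
cloud = np.random.randn(5000, 3) * 10.0
print(associate_depth(feats, cloud)[:5])
```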
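The motion-estimation step can be approximated with a simplified, loosely coupled stand-in: LK optical flow tracks the registered features between frames, and PnP with RANSAC recovers the pose change from their LiDAR depths. In the thesis this estimate is tightly coupled with the IMU pre-integration, which supplies the initial value; the interfaces and parameter shapes below are assumptions.

```python
import cv2
import numpy as np

def estimate_motion(prev_gray, curr_gray, prev_pts, depths, K):
    """Track features with LK optical flow, then estimate the camera pose change
    with PnP, using the LiDAR depths recovered by the association step.

    prev_pts : (M, 1, 2) float32 pixel coordinates in the previous frame
    depths   : (M,) LiDAR depths of those features (e.g. from associate_depth above)
    K        : (3, 3) camera intrinsic matrix
    """
    # LK optical flow gives the feature positions in the current frame.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    ok = status.ravel() == 1

    # Back-project the tracked features of the previous frame to 3D using LiDAR depth.
    uv = prev_pts.reshape(-1, 2)[ok]
    z = depths[ok]
    x = (uv[:, 0] - K[0, 2]) / K[0, 0] * z
    y = (uv[:, 1] - K[1, 2]) / K[1, 1] * z
    pts3d = np.stack([x, y, z], axis=1).astype(np.float32)

    # PnP with RANSAC estimates the relative pose from 3D-2D correspondences;
    # in a tightly coupled system the IMU pre-integration would seed this solve.
    _, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, curr_pts.reshape(-1, 2)[ok].astype(np.float32), K, None)
    return rvec, tvec, inliers
```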
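Finally, the factor graph back end can be sketched with GTSAM's iSAM2 interface: odometry constraints are added as between-factors as keyframes arrive, and a loop-closure between-factor is inserted when the bag-of-words and point cloud matcher reports a loop. The noise values, symbols, and helper names are placeholders rather than the thesis configuration.

```python
import numpy as np
import gtsam

# iSAM2 incrementally re-optimizes the factor graph as new factors arrive.
isam = gtsam.ISAM2()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 0.1, 0.1, 0.1]))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.02))

def X(i):
    """Symbol for the i-th keyframe pose."""
    return gtsam.symbol('x', i)

# Anchor the first keyframe with a prior factor.
graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
values.insert(X(0), gtsam.Pose3())
isam.update(graph, values)

def add_keyframe(i, relative_pose, predicted_pose, loop_to=None, loop_pose=None):
    """Add one odometry factor (from the LiDAR/visual/IMU front end) and,
    optionally, a loop-closure factor back to an earlier keyframe."""
    graph = gtsam.NonlinearFactorGraph()
    values = gtsam.Values()
    graph.add(gtsam.BetweenFactorPose3(X(i - 1), X(i), relative_pose, odom_noise))
    values.insert(X(i), predicted_pose)
    if loop_to is not None:
        graph.add(gtsam.BetweenFactorPose3(X(loop_to), X(i), loop_pose, loop_noise))
    isam.update(graph, values)
    return isam.calculateEstimate().atPose3(X(i))

# Example: one keyframe moved 1 m forward along x.
step = gtsam.Pose3(gtsam.Rot3(), np.array([1.0, 0.0, 0.0]))
print(add_keyframe(1, step, step))
```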