With the rapid development of autonomous driving, the demands on vehicle autonomous navigation, path planning, and environmental perception are steadily increasing, and the development of SLAM technology has accordingly attracted exceptional attention. Current single-sensor SLAM algorithms suffer from the inherent limitations of the sensor itself, especially in complex and harsh environments. Fusing multiple sensors is therefore needed to improve the accuracy, redundancy, and robustness of the SLAM algorithm. In this paper, we propose a novel algorithm that fuses an IMU, a monocular camera, and a lidar.

First, the information obtained by the three sensors is jointly preprocessed. A lidar-inertial odometry is then obtained by combined motion estimation from the laser and inertial measurements, and a visual-inertial odometry is obtained by combined motion estimation from the visual and inertial measurements. Finally, the IMU pre-integration factor, lidar odometry factor, visual odometry factor, and loop-constraint factor form a factor graph, which is optimized to complete the global pose optimization and output the optimized global pose estimate and point cloud map. Loop detection in this paper combines laser and visual detection, and the loop-constraint factor is obtained through ICP registration to eliminate the accumulated error of the system.

The main research contents are as follows.

First, the raw information obtained by the sensors is preprocessed for fusion. IMU pre-integration decouples position, velocity, and orientation, so that the current pre-integration value can be computed from the current measurements and the pre-integration value of the previous keyframe alone. The raw laser point cloud is preprocessed by distortion compensation and segmentation. For the visual information, feature correspondences between image keyframes are obtained by extracting and tracking visual point features and line features.

Second, a tightly coupled lidar-visual-inertial odometry is obtained by motion estimation on the preprocessed fused information. Starting from the initial pose provided by IMU pre-integration, the lidar-inertial odometry is obtained from laser point cloud motion estimation using a two-step Levenberg-Marquardt (LM) optimization. By using the point cloud depth to recover the 2D visual features associated with the laser, the visual-inertial odometry is obtained by a pose estimation method that combines 3D visual point and line features.

Third, to address the error accumulation in the pose estimates of the lidar-visual-inertial odometry, a dual loop-closure scheme combining visual loop detection and laser nearest-neighbor detection is adopted in this paper to reduce the false-detection rate of loop closure. The optimized global pose estimate and a consistent point cloud map are obtained by jointly optimizing the loop-constraint factor, IMU pre-integration factor, visual odometry factor, and lidar odometry factor.

Finally, the algorithm is evaluated on the KITTI_00 sequence and in real campus scenes. On the KITTI dataset, the absolute root-mean-square error of the proposed algorithm is 11.5 m lower than that of LeGO-LOAM, the outlines of the map are clearer, and the global pose is more consistent. The campus data are collected in three different scenes; in all three, the positioning accuracy of the proposed algorithm is better than that of LeGO-LOAM, especially in the TIB scene with the more complex acquisition path, and the error reduction across the three scenes ranges from 3% to 25%.
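The decoupling property of IMU pre-integration described above (the increment between two keyframes depends only on the measurements inside that interval, not on the keyframe's absolute state) can be sketched in a deliberately simplified 1D form. The function names and the 1D simplification are illustrative assumptions, not the thesis implementation, which pre-integrates full 3D position, velocity, and orientation:

```python
def preintegrate(accels, dt):
    """Pre-integrate raw accelerometer samples between two keyframes.

    Returns (delta_v, delta_p): velocity and position increments that
    depend only on the measurements in this interval, so they never
    need to be recomputed when the keyframe state is re-estimated.
    (1D sketch; a real IMU pre-integration also handles rotation and bias.)
    """
    dv, dp = 0.0, 0.0
    for a in accels:
        dp += dv * dt + 0.5 * a * dt * dt
        dv += a * dt
    return dv, dp

def propagate(p0, v0, delta, dt_total):
    """Combine a keyframe state with a pre-integrated increment."""
    dv, dp = delta
    return p0 + v0 * dt_total + dp, v0 + dv
```

Because `preintegrate` never touches `p0` or `v0`, the same `delta` can be reused each time the factor graph updates the previous keyframe's pose.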
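The joint optimization over odometry and loop-constraint factors can be illustrated with a toy 1D pose graph. This is a hypothetical, heavily reduced sketch (real factor graphs over 6-DoF poses are nonlinear and solved with dedicated libraries); the factor values below are invented for illustration:

```python
import numpy as np

# Hypothetical 1D pose graph: 4 poses, odometry factors between
# consecutive poses (with simulated drift), and one loop-closure
# factor tying x3 back to x0, as an ICP registration would provide.
odom = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.1)]  # (i, j, measured x_j - x_i)
loop = [(3, 0, -3.0)]                           # loop constraint: x_0 - x_3

def solve_pose_graph(n, factors, anchor=0):
    """Linear least squares over relative factors x_j - x_i = z."""
    rows, rhs = [], []
    for i, j, z in factors:
        r = np.zeros(n)
        r[j], r[i] = 1.0, -1.0
        rows.append(r)
        rhs.append(z)
    # Anchor one pose at 0 to remove the gauge freedom.
    r = np.zeros(n)
    r[anchor] = 1.0
    rows.append(r)
    rhs.append(0.0)
    A, b = np.array(rows), np.array(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]

x = solve_pose_graph(4, odom + loop)
```

The loop factor redistributes the 0.1 of accumulated odometry drift evenly over the trajectory, which is exactly the role the loop-constraint factor plays in the full factor graph.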
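The ICP registration used to derive the loop constraint can be sketched as follows: alternate nearest-neighbor matching with a closed-form rigid alignment until the two scans agree. This is a minimal 2D point-to-point sketch with brute-force matching, not the 3D pipeline of the thesis:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Closed-form rigid alignment (SVD / Kabsch) of paired 2D points."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP; returns the accumulated R, t."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences.
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

The recovered (R, t) between the current scan and the revisited keyframe's scan is what enters the graph as the loop-constraint factor.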