SLAM (Simultaneous Localization and Mapping) is a key technology for the autonomous operation and interaction of mobile robots, and it has been widely applied in autonomous navigation, unmanned driving, and drone monitoring. Laser SLAM obtains accurate distance measurements and builds dense point cloud maps, but it lacks texture information and is prone to localization failure in scenes with few geometric features. Visual SLAM offers rich texture information and strong scene perception capabilities, and compared with monocular visual SLAM, panoramic visual SLAM has a wider perception range and acquires information faster and more completely; however, visual SLAM is sensitive to illumination, and the sparse maps it builds are difficult to apply further in other fields. In view of these limitations of single-sensor SLAM, this paper studies the modules involved in SLAM fusing panoramic vision and lidar, including key technologies such as panoramic visual SLAM, construction and matching of hybrid features, and joint pose estimation and optimization, and proposes a SLAM solution integrating panoramic vision and lidar that makes full use of the advantages of the two types of sensors to achieve higher accuracy and greater robustness. The main research contents are as follows:

(1) Panoramic visual SLAM. Aiming at the small field of view, frequent tracking loss, and low positioning accuracy of monocular visual SLAM, this paper studies SLAM based on panoramic vision: panoramic images are stitched according to a spherical imaging model, and the SPHORB algorithm is used to extract and match features on them. Spherical epipolar geometry and spherical EPnP are derived, and a new spherical reprojection error model is defined, realizing a spherical pose estimation and optimization method based on panoramic vision and, finally, a complete panoramic visual SLAM system.
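To make the spherical reprojection error concrete, the following is a minimal sketch in Python, assuming the residual is the angle between the bearing predicted by the current pose and the unit bearing observed on the panoramic sphere; the function names and this particular residual form are illustrative assumptions, not the exact formulation above.

    import numpy as np

    def spherical_project(P_cam):
        # On the unit sphere, a 3D point in the camera frame projects
        # to its normalized direction (bearing) vector.
        return P_cam / np.linalg.norm(P_cam)

    def spherical_reprojection_error(R, t, P_world, bearing_obs):
        # Transform the map point into the camera frame and project it
        # onto the unit sphere.
        bearing_pred = spherical_project(R @ P_world + t)
        # Residual: angle between the predicted bearing and the unit
        # bearing observed in the panoramic image.
        cos_angle = np.clip(bearing_pred @ bearing_obs, -1.0, 1.0)
        return np.arccos(cos_angle)

In pose optimization, residuals of this form would be minimized over all observed map points.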
(2) Construction and matching of hybrid features. Aiming at the problems that visual features are easily affected by illumination while laser features lack texture information, this paper proposes a more discriminative hybrid feature and a corresponding matching method: the depth information of the laser point cloud is used to extend the dimensionality of the visual features, and the resulting depth-enhanced ORB visual features, together with the line and surface features of the point cloud, constitute a more discriminative hybrid feature. In the matching stage, the main-direction angle difference and the depth difference of all matched point pairs are computed, and their histograms are used to quickly eliminate false matches, achieving more robust hybrid feature matching and data association.

(3) Joint pose estimation and optimization. Aiming at the insufficient fusion of multi-sensor information and the low accuracy of pose estimation, this paper proposes a joint pose estimation method based on the fusion of panoramic vision and lidar. Based on the point, line, and surface matching relationships of the hybrid features, 3D constraints and 2D constraints are constructed simultaneously; methods for combining these two types of constraints, which carry information of different dimensions, are discussed and proposed; finally, the Levenberg-Marquardt algorithm is used to optimize the pose. To address the cumulative error of frame-to-frame pose estimation, a local feature map containing both visual features and point cloud features is constructed, and the pose is re-optimized by matching each frame against this feature map, improving the overall pose accuracy of the SLAM system.
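As an illustration of the histogram-based rejection step in (2), the sketch below assumes that correct matches concentrate in a few dominant bins of the angle-difference and depth-difference histograms (in the spirit of the rotation-consistency check used in ORB-SLAM) while false matches scatter; the bin count, the number of bins kept, and all names are assumptions.

    import numpy as np

    def filter_matches_by_histogram(angle_diffs, depth_diffs,
                                    n_bins=30, keep_top=3):
        # Histogram the main-direction angle difference and the depth
        # difference over all tentative matches; consistent matches
        # cluster in a few dominant bins, false matches spread out.
        keep = np.ones(len(angle_diffs), dtype=bool)
        for values in (np.asarray(angle_diffs), np.asarray(depth_diffs)):
            edges = np.linspace(values.min(), values.max() + 1e-9,
                                n_bins + 1)
            idx = np.clip(np.digitize(values, edges) - 1, 0, n_bins - 1)
            counts = np.bincount(idx, minlength=n_bins)
            # Keep only matches falling into the most populated bins.
            top_bins = np.argsort(counts)[-keep_top:]
            keep &= np.isin(idx, top_bins)
        return keep

Only match pairs that survive both histogram tests would be passed on to data association.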
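For (3), the following sketch shows one way the heterogeneous constraints could be stacked into a single Levenberg-Marquardt problem using scipy, assuming point-to-plane residuals for the 3D (lidar) constraints and angular reprojection residuals for the 2D (panoramic) constraints; the weights, residual forms, and pose parameterization are assumptions rather than the method's exact formulation.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def joint_residuals(pose, pts3d_src, pts3d_dst, plane_normals,
                        map_pts, bearings_obs, w3d=1.0, w2d=1.0):
        # pose = [rx, ry, rz, tx, ty, tz]: rotation vector + translation.
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        t = pose[3:]
        # 3D constraint: point-to-plane distance of transformed lidar
        # points against their matched planes in the target frame.
        r3d = np.einsum('ij,ij->i',
                        (pts3d_src @ R.T + t) - pts3d_dst, plane_normals)
        # 2D constraint: angular reprojection error of map points
        # against observed unit bearings on the panoramic sphere.
        pred = map_pts @ R.T + t
        pred /= np.linalg.norm(pred, axis=1, keepdims=True)
        r2d = np.arccos(np.clip(np.sum(pred * bearings_obs, axis=1),
                                -1.0, 1.0))
        return np.concatenate([w3d * r3d, w2d * r2d])

    # Levenberg-Marquardt refinement from an initial pose guess:
    # result = least_squares(joint_residuals, x0=np.zeros(6), method='lm',
    #                        args=(pts3d_src, pts3d_dst, plane_normals,
    #                              map_pts, bearings_obs))

Stacking both residual types into one vector lets a single Levenberg-Marquardt solve fuse the 3D and 2D constraints jointly, which is the spirit of the joint optimization described above.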