To operate in Global Navigation Satellite System (GNSS)-denied environments, autonomous vehicles must rely on their own onboard sensors for passive autonomous navigation. Inertial navigation systems, vision sensors, and lidar each have their own advantages and disadvantages, and each alone is limited in autonomous-vehicle navigation applications. This thesis addresses the navigation problem of autonomous vehicles in GNSS-denied environments: by exploiting the complementary strengths of the camera and the IMU, it improves the accuracy and robustness of the system and, ultimately, its applicability. The robustness of visual-inertial Simultaneous Localization And Mapping (SLAM) for autonomous driving scenarios is studied in depth, with the following main contributions:

To address the interference caused by dynamic objects in autonomous driving scenes, a visual-inertial SLAM algorithm that integrates a dynamic object detection model is proposed. The detection model is trained with the YOLOv5 object detection algorithm and integrated into the feature tracking module of the visual front end to remove dynamic feature points, thereby suppressing dynamic interference. In addition, a tightly coupled fusion of GPS measurements and Visual-Inertial Odometry (VIO) eliminates the accumulated error of the navigation algorithm in large-scale scenes. Comparative experiments on the KITTI dataset and real-vehicle data show that the proposed algorithm improves positioning accuracy by about 40%, providing technical and theoretical guidance for autonomous vehicle navigation in dynamic environments.

To address the limitations of visual-inertial SLAM for autonomous vehicles operating in weakly textured environments and motion-blurred scenes, a visual-inertial SLAM algorithm based on point and line features is proposed. The fusion strategy weights the reprojection errors of the point and line features according to the texture richness of the image. In addition, an improved EDLine line feature extraction algorithm and a line segment merging algorithm are proposed to reduce redundant and mismatched line features. Comparative experiments show that the complementary fusion of point and line features significantly improves the robustness of the navigation algorithm in weakly textured environments and achieves higher accuracy.

To enable visual-inertial SLAM to operate in complex conditions such as rain, fog, and snow, a tightly coupled lidar, camera, and Inertial Measurement Unit (IMU) method based on factor graph optimization is proposed to further improve the positioning and mapping performance of autonomous vehicles in complex environments. The method introduces laser point clouds into the visual-inertial sliding window and achieves global positioning and mapping for multi-source fusion SLAM through nonlinear optimization over the fused multi-source data. Dataset evaluations show that the algorithm achieves high-precision positioning and mapping in complex environments.
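As a rough illustration of the dynamic-feature rejection described in the first contribution, the sketch below drops tracked feature points that fall inside detector-reported bounding boxes of dynamic object classes before they are passed to the VIO back end. The function name, box format, class list, and confidence threshold are hypothetical placeholders for the thesis's actual YOLOv5 integration, which is not specified here; this is a minimal sketch, not the implemented method.

```python
from typing import List, Tuple

# Hypothetical box format: (class_name, x_min, y_min, x_max, y_max, confidence).
Box = Tuple[str, float, float, float, float, float]

# Assumed set of classes treated as dynamic; the thesis's exact list may differ.
DYNAMIC_CLASSES = {"car", "truck", "bus", "person", "bicycle", "motorcycle"}


def filter_dynamic_features(
    features: List[Tuple[float, float]],
    detections: List[Box],
    min_confidence: float = 0.5,
) -> List[Tuple[float, float]]:
    """Drop feature points lying inside any confident dynamic-object bounding box.

    `features` are (u, v) pixel coordinates from the visual front end's tracker;
    `detections` are per-frame boxes from an object detector such as YOLOv5.
    """
    dynamic_boxes = [
        (x1, y1, x2, y2)
        for cls, x1, y1, x2, y2, conf in detections
        if cls in DYNAMIC_CLASSES and conf >= min_confidence
    ]

    def is_static(u: float, v: float) -> bool:
        # A point is kept only if it lies outside every dynamic bounding box.
        return not any(
            x1 <= u <= x2 and y1 <= v <= y2 for x1, y1, x2, y2 in dynamic_boxes
        )

    # Only static feature points are forwarded to the VIO optimization.
    return [(u, v) for (u, v) in features if is_static(u, v)]


if __name__ == "__main__":
    feats = [(100.0, 120.0), (400.0, 300.0)]
    dets = [("car", 350.0, 250.0, 500.0, 400.0, 0.9)]
    print(filter_dynamic_features(feats, dets))  # keeps only the first point
```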