
Research On SLAM System Based On Stereo Vision And Inertial Navigation

Posted on: 2023-11-16   Degree: Master   Type: Thesis
Country: China   Candidate: C L Zhang   Full Text: PDF
GTID: 2568306770484104   Subject: Mechanical engineering
Abstract/Summary:
With the popularization of self-driving cars, SLAM (Simultaneous Localization and Mapping) has attracted wide attention from researchers at home and abroad. By processing data from sensors such as lidar or cameras, SLAM provides self-driving cars with simultaneous mapping and localization in unknown environments and supports the decision-making and path planning needed to carry out autonomous driving tasks. Visual SLAM uses a camera as its main sensor and can extract a large amount of information from the environment, but because the camera forms images according to optical principles it is sensitive to external illumination. Mainstream SLAM algorithms rely on feature-point extraction to compute the odometry and therefore struggle to localize reliably in texture-less or weakly textured environments. In addition, most traditional SLAM algorithms assume a static environment; when many dynamic objects are present, incorrect feature matches on those objects introduce large errors and the estimated pose deviates from the true pose.

To address these problems, this paper proposes a SLAM system based on stereo vision and inertial navigation. First, a method for quickly removing dynamic features is proposed so that only static features are retained for the subsequent odometry computation. Then an Inertial Measurement Unit (IMU) is fused with the stereo camera to handle localization failures when the camera moves rapidly or operates under illumination changes and weak texture. The main contents of this paper are as follows:

(1) The projection and distortion models of the stereo camera are studied, and the coordinate transformation from the 3D physical world to the image frame is derived. The camera distortion model lays the foundation for the accurate computation of the subsequent visual odometry.

(2) To reduce the interference of dynamic objects with visual SLAM, a clustering-based feature segmentation and dynamic-feature culling method is proposed. A motion vector is first constructed for each feature from two consecutive image frames, and the set of motion vectors is obtained from the matches between features in the two frames. Canopy and Fuzzy C-Means (FCM) clustering are then used jointly to segment feature points with different motion characteristics. Finally, a motion-consistency check verifies and eliminates the dynamic features.
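As an illustration of the clustering step in (2), the following minimal sketch clusters per-feature motion vectors with fuzzy c-means and keeps only the cluster consistent with the dominant image motion. It is an assumption-laden approximation rather than the thesis implementation: the Canopy stage that would normally choose the number of clusters is replaced by a fixed cluster count, the consistency check is a simple median heuristic, and all names (fuzzy_cmeans, cull_dynamic_features, prev_pts, curr_pts) are hypothetical.

    import numpy as np

    def fuzzy_cmeans(x, n_clusters=2, m=2.0, n_iter=50, seed=0):
        """Plain-NumPy fuzzy c-means; returns cluster centers and memberships."""
        rng = np.random.default_rng(seed)
        u = rng.random((len(x), n_clusters))
        u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per feature
        for _ in range(n_iter):
            w = u ** m
            centers = (w.T @ x) / w.sum(axis=0)[:, None]
            d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-9
            u = 1.0 / d ** (2.0 / (m - 1.0))         # inverse-distance memberships
            u /= u.sum(axis=1, keepdims=True)
        return centers, u

    def cull_dynamic_features(prev_pts, curr_pts, n_clusters=2):
        """Cluster feature motion vectors between two frames and keep the cluster
        closest to the dominant (median) motion; the rest are treated as dynamic."""
        motion = curr_pts - prev_pts                 # (N, 2) feature motion vectors
        centers, u = fuzzy_cmeans(motion, n_clusters)
        labels = u.argmax(axis=1)
        dominant = np.argmin(np.linalg.norm(centers - np.median(motion, axis=0), axis=1))
        return labels == dominant                    # boolean mask of static features

In the system described above, the cluster count would instead come from the Canopy pre-clustering stage, and the final verification would rely on motion consistency between frames rather than the median heuristic shown here.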
(3) The construction of the visual odometry is studied, and a vision-only dynamic SLAM system based on the proposed dynamic-feature culling method is built. Inverse compositional optical flow is first used to track the feature points of the previous frame. The static feature points are then used to solve the camera pose transformation, the proposed culling method removes the dynamic feature points, and additional features are extracted only when too few feature points remain. Finally, an online bag-of-words model is used for loop-closure detection, and the Levenberg-Marquardt method minimizes the reprojection error in the back-end optimization.

(4) To address the drift that vision-only SLAM suffers under illumination changes and in weakly textured environments, the IMU is fused with the camera to build a stereo visual-inertial SLAM system. An inertial measurement model is constructed, the IMU data are processed by pre-integration, and the visual-inertial alignment procedure is studied. Then, based on the calibration of the stereo camera and the IMU, inertial residuals are added on top of the vision-only SLAM to form a complete visual-inertial optimization module, reducing the cumulative drift caused by long-term operation of the system.

(5) To verify the effectiveness of the proposed algorithms, the dynamic visual SLAM of this paper is compared with the OV2SLAM algorithm on the TUM dynamic dataset, and the dynamic visual SLAM is compared with the visual-inertial SLAM on the KITTI dataset. An INDEMIND stereo visual-inertial module is also used to collect several data sequences in a dynamic environment, and experiments are carried out on the ROS platform to validate the proposed SLAM algorithm. The results show that the proposed dynamic segmentation algorithm eliminates most dynamic features and improves the robustness of visual SLAM in dynamic environments, and that the system runs in real time without a GPU. The proposed visual-inertial SLAM further reduces the absolute and relative pose errors, lowers the chance of falsely rejecting static features in the dynamic-feature rejection stage, and improves the stability of the SLAM system.
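The back end summarized in (3) and (4) amounts to a nonlinear least-squares problem over the keyframe states. The formula below is a generic sketch of such a tightly coupled visual-inertial cost, written with assumed notation rather than symbols taken from the thesis:

    \min_{\mathcal{X}} \;
      \sum_{(i,j)\in\mathcal{V}} \rho\Big( \big\| \, u_{ij} - \pi\!\big( T_i \, p_j \big) \big\|^{2}_{\Sigma_{ij}} \Big)
      \;+\;
      \sum_{i\in\mathcal{I}} \big\| \, e^{\mathrm{imu}}_{i,i+1} \big\|^{2}_{\Sigma^{\mathrm{imu}}_{i,i+1}}

Here \mathcal{X} collects the keyframe poses, velocities, IMU biases, and landmarks; u_{ij} is the observation of landmark p_j in keyframe i; \pi is the stereo projection and distortion model of (1); T_i is the transformation that maps world points into the camera frame of keyframe i (including the camera-IMU extrinsic calibration); \rho is a robust kernel; and e^{\mathrm{imu}}_{i,i+1} stacks the residuals of the pre-integrated rotation, velocity, and position between consecutive keyframes. Minimizing this cost with the Levenberg-Marquardt method gives the visual-inertial back end, and dropping the second sum recovers the vision-only back end of (3).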
Keywords/Search Tags:SLAM, stereo vision, IMU, dynamic environment, multi-sensor fusion