
Research On SLAM Technology Based On Monocular Vision And Inertial Navigation Fusion

Posted on: 2020-03-08
Degree: Master
Type: Thesis
Country: China
Candidate: L Wang
Full Text: PDF
GTID: 2428330596995221
Subject: Mechanical engineering

Abstract:
Simultaneous Localization and Mapping (SLAM), as the core technology enabling fully autonomous movement of mobile robots, is a hotspot in robotics research. With the rapid development of computer technology and computer vision in recent years, SLAM systems using a camera as the main sensor have become the focus of SLAM research. In visual SLAM, when the carrier moves too fast, environmental features are sparse, or imaging conditions are poor, the captured images may suffer motion blur or yield too few feature matches between adjacent frames. This can cause the robustness and accuracy of the SLAM system to degrade rapidly, or even cause the system to fail. It is therefore especially important to fuse information from other sensors to compensate for these deficiencies.

An Inertial Measurement Unit (IMU) measures the angular velocity and acceleration of the carrier, from which its velocity, attitude, and position can be computed in real time. An IMU can estimate the carrier's pose well during short bursts of fast motion, which is precisely the weakness of the camera noted above. However, IMU measurements drift noticeably over time (accumulating error) during slow motion, whereas camera measurements do not drift. The cumulative IMU error can therefore be estimated and corrected using camera data, so that the pose estimate remains valid after slow motion. The camera and the IMU are thus strongly complementary, which gives Visual-Inertial SLAM considerable research value and promise.

Building on the monocular Visual-Inertial SLAM theory proposed by Raul Mur-Artal et al. and on the ORB-SLAM2 system, this thesis proposes a SLAM scheme that tightly couples monocular visual and inertial information. At the front end, the initial pose of the current frame is predicted by pre-integrating the IMU data between the previous keyframe and the current frame. Once the camera pose is successfully estimated, points on the local map are projected onto the current frame and matched with its feature points; the current frame is then optimized by minimizing the feature reprojection error and the IMU error terms over all matched points.

In the back-end optimization, the keyframes in the sliding window are retained, and an appropriate window size is chosen to guarantee real-time performance; the back end jointly optimizes the camera error and the IMU error over the sliding window. For loop closing and global optimization, the strategy is to perform pose-graph optimization over 6 DOF, correcting the velocities of the relevant keyframes together with their rotations, so that accuracy is preserved during local optimization and the IMU information can be used immediately after the pose graph is optimized. Finally, a global optimization over all states runs in a parallel thread of the system.

The thesis verifies the performance of the Visual-Inertial SLAM system on the EuRoC dataset and in experiments on an actual mobile robot. A thorough comparison with the original monocular ORB-SLAM2 system shows that the proposed Visual-Inertial SLAM system does not significantly improve accuracy over the purely visual system, but its robustness is improved.
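The front-end prediction described above can be sketched as a simple IMU pre-integration loop: between two keyframes, the raw gyroscope and accelerometer samples are integrated into a relative rotation, velocity, and position. This is a minimal illustration under simplifying assumptions, not the thesis code: the function names are my own, and gravity compensation and bias estimation (both essential in a real system) are omitted for clarity.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector (for the so(3) exponential)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(phi):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-8:
        return np.eye(3) + skew(phi)
    a = phi / theta
    K = skew(a)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Integrate raw IMU samples taken between two keyframes.

    Returns the relative rotation dR, velocity dv, and position dp,
    expressed in the frame of the first keyframe (gravity and biases
    ignored for clarity).
    """
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * (dR @ a) * dt**2   # position update
        dv += (dR @ a) * dt                       # velocity update
        dR = dR @ exp_so3(w * dt)                 # rotation update
    return dR, dv, dp
```

The predicted initial pose of the current frame is then simply the previous keyframe's pose composed with `(dR, dv, dp)`, which serves as the starting point for the visual optimization.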
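The tightly-coupled optimization of the current frame can likewise be sketched as a single scalar objective: a sum of feature reprojection errors plus an IMU term that penalises disagreement between the optimised state and the pre-integrated prediction. The pinhole model, the weight `w_imu`, and the exact form of the inertial residual here are illustrative assumptions, not the thesis formulation.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of a 3-D point X into a camera with pose (R, t)."""
    x_cam = R @ X + t
    u = K @ (x_cam / x_cam[2])
    return u[:2]

def tightly_coupled_cost(K, R, t, v, points, obs, imu_pred, w_imu=1.0):
    """Sum of feature reprojection errors and IMU error terms.

    imu_pred = (R_pred, t_pred, v_pred) comes from IMU pre-integration;
    the inertial term penalises deviation of the optimised rotation,
    position, and velocity from that prediction.
    """
    cost = 0.0
    for X, uv in zip(points, obs):               # visual term
        r = project(K, R, t, X) - uv
        cost += r @ r
    R_pred, t_pred, v_pred = imu_pred            # inertial term
    cost += w_imu * (np.linalg.norm(R.T @ R_pred - np.eye(3))**2
                     + np.sum((t - t_pred)**2)
                     + np.sum((v - v_pred)**2))
    return cost
```

In the real system this objective is minimised over the states in the sliding window with a nonlinear least-squares solver; the sketch only shows how the visual and inertial residuals enter one joint cost rather than being handled separately.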
Keywords/Search Tags: Simultaneous Localization and Mapping (SLAM), Monocular Vision, Inertial Measurement Unit, Sensor Fusion, Tightly-Coupled