
Research On Visual Inertial SLAM Algorithm Using Point And Line Features

Posted on: 2021-06-24
Degree: Master
Type: Thesis
Country: China
Candidate: D Z Qiu
Full Text: PDF
GTID: 2518306569997939
Subject: IC Engineering
Abstract/Summary:
Simultaneous Localization and Mapping (SLAM) is one of the key technologies for developing mobile robots. It is used to localize a robot in an unfamiliar environment and to build a map of its surroundings. Visual SLAM is vulnerable to illumination changes and motion blur, which make it difficult to track visual features stably and lead to a decrease in localization accuracy. An IMU sensor can improve the positioning performance of the system when visual feature tracking fails. In addition, man-made environments contain rich line features, and using both point and line features improves the robustness of the visual constraints.

To achieve precise positioning of the robot in complex environments, we build a visual-inertial SLAM system using point and line features. The optical flow method is used to track point features, the Fast Line Detector is used to extract line features, and the extracted line segments are matched using the Line Band Descriptor. The system states, including poses and landmarks, are optimized in a sliding window by minimizing the visual reprojection errors and the IMU residuals. Spatial point and line features are parameterized by the inverse depth and by the orthonormal representation of the Plücker coordinates, respectively, and the Jacobian matrix of each visual reprojection error is derived and analyzed. In addition, loop detection and pose graph optimization are added to eliminate accumulated drift: a bag-of-words model is used to measure the similarity between two keyframe images, and when loop detection recognizes a previously visited place, the keyframe poses are optimized in a pose graph to correct the accumulated drift. We build a monocular visual-inertial SLAM system and evaluate it on a MAV dataset. The experiments show that the proposed system achieves high localization accuracy: after loop detection and pose graph optimization, the root mean square of the absolute trajectory error is 0.085 m on average.

Indoor ground robots moving on a plane travel mostly at constant speed. Without accelerometer excitation, the scale of a monocular visual-inertial system becomes unobservable, leading to scale drift and inaccurate estimates. To improve the accuracy and robustness of the system, we utilize the depth information from a depth camera, extract spatial visual features using the depth data, and use them in system initialization and pose estimation. The depth image contains absolute scale information, which provides effective corrections for scale drift and significantly improves localization accuracy. Finally, the RGB-D visual-inertial system is evaluated on color-depth-inertial datasets collected by a ground robot and a handheld device. The experiments show that depth information effectively improves the localization accuracy in indoor environments: the root mean square of the absolute trajectory error of the monocular visual-inertial system is 0.273 m on average, while after fusing the depth information it is 0.163 m on average.
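The front end described above tracks point features with optical flow and extracts line segments with the Fast Line Detector, matching the segments by their Line Band Descriptors. A minimal sketch of such a front end using OpenCV is given below; it assumes the opencv-contrib-python package for the ximgproc Fast Line Detector, and the LBD matching step is only indicated in a comment, since its Python binding is not universally available. It is an illustration of the general approach, not code from the thesis.

```python
import cv2
import numpy as np

def track_points(prev_gray, cur_gray, prev_pts):
    """Track point features from prev_gray to cur_gray with pyramidal LK optical flow."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, cur_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.reshape(-1).astype(bool)
    return prev_pts[ok], cur_pts[ok]

def detect_features(gray, max_corners=150):
    """Detect new point features and line segments in a grayscale image."""
    # Shi-Tomasi corners feed the point-feature tracker
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=20)
    # Fast Line Detector (requires the opencv-contrib ximgproc module)
    fld = cv2.ximgproc.createFastLineDetector()
    lines = fld.detect(gray)  # each detected line is [x1, y1, x2, y2]
    # Line matching with the Line Band Descriptor (LBD) would follow here;
    # in C++ this is cv::line_descriptor::BinaryDescriptor plus a binary matcher.
    return pts, lines
```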
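The thesis parameterizes spatial lines by the orthonormal representation of their Plücker coordinates, which reduces the 6-dimensional Plücker vector to the 4 degrees of freedom of a 3D line. The NumPy sketch below (my own illustration, with assumed variable names) converts a Plücker line (n, v), where n is the normal of the plane through the line and the origin and v is the line direction, into the orthonormal pair (U, W) in SO(3) x SO(2).

```python
import numpy as np

def plucker_to_orthonormal(n, v):
    """Convert Plücker coordinates (n, v) to the orthonormal representation (U, W).

    n : normal of the plane spanned by the line and the origin (3-vector)
    v : direction of the line (3-vector), with n perpendicular to v
    Returns U in SO(3) and W in SO(2); together they carry the line's 4 DoF.
    """
    n = np.asarray(n, dtype=float)
    v = np.asarray(v, dtype=float)
    u1 = n / np.linalg.norm(n)
    u2 = v / np.linalg.norm(v)
    u3 = np.cross(n, v)
    u3 /= np.linalg.norm(u3)
    U = np.column_stack([u1, u2, u3])      # rotation encoding the line's orientation
    s = np.hypot(np.linalg.norm(n), np.linalg.norm(v))
    w1, w2 = np.linalg.norm(n) / s, np.linalg.norm(v) / s
    W = np.array([[w1, -w2],
                  [w2,  w1]])              # encodes the origin-to-line distance |n| / |v|
    return U, W
```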
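Point landmarks are parameterized by their inverse depth in the frame where they were first observed, and the sliding-window optimizer minimizes the reprojection error of each observation together with the IMU residuals. Below is a hedged sketch of such a point reprojection residual on normalized image coordinates; the frame conventions and variable names are assumptions for illustration, not the exact formulation in the thesis.

```python
import numpy as np

def point_reprojection_residual(R_wi, p_wi, R_wj, p_wj, R_bc, p_bc, uv_i, uv_j, inv_depth):
    """Reprojection residual of a point first seen in frame i and re-observed in frame j.

    R_w*, p_w* : body poses in the world frame; R_bc, p_bc : camera-to-body extrinsics.
    uv_i, uv_j : normalized image coordinates (2-vectors) of the two observations.
    inv_depth  : inverse depth of the point in camera frame i.
    """
    # Back-project the feature into camera frame i using its inverse depth
    P_ci = np.array([uv_i[0], uv_i[1], 1.0]) / inv_depth
    # Camera i -> body i -> world
    P_w = R_wi @ (R_bc @ P_ci + p_bc) + p_wi
    # World -> body j -> camera j
    P_cj = R_bc.T @ (R_wj.T @ (P_w - p_wj) - p_bc)
    # Residual on the normalized image plane of camera j
    return P_cj[:2] / P_cj[2] - np.asarray(uv_j)
```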
Keywords/Search Tags: visual odometry, IMU, SLAM, point and line features