
Research on a Monocular Visual SLAM Method Fusing an IMU

Posted on: 2020-05-04
Degree: Master
Type: Thesis
Country: China
Candidate: H Huang
Full Text: PDF
GTID: 2428330575485602
Subject: Control Science and Engineering
Abstract/Summary:
In recent years, vision-based simultaneous localization and mapping (SLAM) has been widely used in mobile robots, autonomous driving, augmented reality, and other fields. Monocular visual SLAM has become a hot research topic because of its low cost, light weight, and long viewing range. However, owing to the inherent limitations of a monocular camera, monocular visual SLAM performs poorly in texture-less scenes and under fast motion, and a monocular camera cannot recover metric scale. These shortcomings of a single sensor can be compensated by sensor fusion. An inertial measurement unit (IMU) and a visual sensor are clearly complementary: the IMU can provide scale information for monocular vision and pose estimates during fast motion, while the camera can effectively correct the IMU's drift when the platform is nearly static. Fusing the camera and the IMU therefore improves the accuracy and robustness of a monocular SLAM system. This paper proposes a monocular visual SLAM scheme that integrates an IMU. The research work consists of the following parts:

(1) In corner extraction, a FAST (Features from Accelerated Segment Test) corner detection algorithm based on an adaptive grid and Shi-Tomasi scoring is proposed to address the uneven spatial distribution of FAST corners. The algorithm first divides the image into a grid, extracts FAST corners in each cell, and selects strong corners for tracking according to their Shi-Tomasi scores. The grid and the FAST thresholds are then adjusted according to the tracking result, so that an adequate number of evenly distributed corners is available for tracking.

(2) In back-end optimization, incremental Bundle Adjustment is used to avoid the redundant computation caused by re-linearizing previously computed and unchanged state vectors when the normal equations of traditional Bundle Adjustment are built. When the camera observes a new landmark, the constraint introduced by that landmark mainly affects the recent camera poses, and its influence on cameras that do not observe the landmark is negligible. Therefore, only the changed states are updated during optimization, while the unchanged states and their contributions are cached to avoid re-computation.

(3) A monocular visual-inertial SLAM system is designed and implemented, consisting of four modules. The sensor preprocessing module tracks corners to provide data for camera pose estimation, while the IMU measurements are pre-integrated to obtain the current pose estimate. In the initialization module, a structure-from-motion method first estimates the camera poses through vision-only initialization; the camera and the IMU are then jointly initialized to recover the metric scale of the system and the IMU biases. Camera and IMU pose estimation is tightly coupled through nonlinear optimization. Loop-closure detection is based on the bag-of-words model, and loop-closure optimization reduces the accumulated error and yields better pose estimates, improving the accuracy of the whole SLAM system.

(4) The proposed algorithm is evaluated on the EuRoC dataset and in a real environment. The experiments show that the proposed detector obtains evenly distributed corners and yields better tracking. Compared with the open-source visual-inertial SLAM system OKVIS, the proposed system achieves a lower root-mean-square localization error.
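To make step (1) concrete, the following is a minimal sketch (not the thesis code) of per-cell FAST detection scored by the Shi-Tomasi (minimum-eigenvalue) response, using OpenCV. The grid size, per-cell corner budget, and FAST threshold are illustrative parameters, and the adaptive adjustment of the grid and thresholds described above is omitted.

```python
import cv2

def grid_fast_shi_tomasi(gray, grid_rows=8, grid_cols=8, per_cell=5, fast_thresh=20):
    """Detect FAST corners per grid cell, keep the strongest per cell by Shi-Tomasi score."""
    h, w = gray.shape
    fast = cv2.FastFeatureDetector_create(threshold=fast_thresh)
    # Shi-Tomasi response (minimum eigenvalue of the structure tensor) for every pixel.
    eig = cv2.cornerMinEigenVal(gray, blockSize=3)
    cell_h, cell_w = h // grid_rows, w // grid_cols
    keypoints = []
    for r in range(grid_rows):
        for c in range(grid_cols):
            y0, x0 = r * cell_h, c * cell_w
            cell = gray[y0:y0 + cell_h, x0:x0 + cell_w]
            kps = fast.detect(cell, None)
            # Rank this cell's FAST corners by their Shi-Tomasi response.
            kps = sorted(kps,
                         key=lambda kp: eig[y0 + int(kp.pt[1]), x0 + int(kp.pt[0])],
                         reverse=True)
            # Keep only the strongest corners, shifted back to full-image coordinates.
            for kp in kps[:per_cell]:
                keypoints.append(cv2.KeyPoint(kp.pt[0] + x0, kp.pt[1] + y0, kp.size))
    return keypoints

# Usage (hypothetical image path):
# gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# corners = grid_fast_shi_tomasi(gray)
```

Capping the number of corners per cell is what enforces the even spatial distribution; the per-cell Shi-Tomasi ranking keeps only the corners that are most stable to track.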
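The incremental Bundle Adjustment in (2) is described only at a high level. As an illustration of the underlying idea, caching per-observation contributions to the normal equations and re-linearizing only the observations whose connected states changed, here is a simplified dense sketch. The class and variable names are invented for this example; a real incremental solver works on sparse block structure and handles marginalization, which is not shown.

```python
import numpy as np

class IncrementalNormalEquations:
    """Cache per-observation Gauss-Newton contributions (J^T J, J^T r) and
    rebuild only the blocks touched by states that actually changed."""

    def __init__(self, state_dim):
        self.H = np.zeros((state_dim, state_dim))  # accumulated normal matrix
        self.b = np.zeros(state_dim)               # accumulated gradient term
        self.cache = {}  # obs_id -> (indices, H_block, b_block)

    def update_observation(self, obs_id, indices, J, r):
        """J, r: Jacobian and residual of one observation w.r.t. the state
        entries listed in `indices`. Called only for new or changed observations."""
        idx = np.ix_(indices, indices)
        # Subtract the stale cached contribution, if this observation was seen before.
        if obs_id in self.cache:
            old_idx, H_old, b_old = self.cache[obs_id]
            self.H[np.ix_(old_idx, old_idx)] -= H_old
            self.b[np.asarray(old_idx)] -= b_old
        H_new = J.T @ J
        b_new = J.T @ r
        self.H[idx] += H_new
        self.b[np.asarray(indices)] += b_new
        self.cache[obs_id] = (indices, H_new, b_new)

    def solve(self, damping=1e-6):
        """One damped Gauss-Newton step from the accumulated normal equations."""
        A = self.H + damping * np.eye(self.H.shape[0])
        return np.linalg.solve(A, -self.b)
```

When a new landmark is observed, only the residuals linking that landmark to the recent camera poses call `update_observation`; the cached contributions of all other observations are reused unchanged, which is the source of the efficiency gain described above.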
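The IMU pre-integration mentioned in (3) accumulates the gyroscope and accelerometer samples between two keyframes into relative rotation, velocity, and position increments expressed in the body frame of the first keyframe, so the optimizer does not have to re-integrate raw measurements at every iteration. The sketch below shows the standard discrete recursion under simplifying assumptions (constant biases over the interval, no noise propagation or bias Jacobians); the function names are ours, not the thesis'.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    """Rodrigues formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-8:
        return np.eye(3) + skew(phi)  # first-order approximation
    a = phi / theta
    K = skew(a)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(accs, gyros, dts, ba=np.zeros(3), bg=np.zeros(3)):
    """Accumulate bias-corrected IMU samples between two keyframes into
    rotation/velocity/position deltas in the first keyframe's body frame."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for a, w, dt in zip(accs, gyros, dts):
        a = np.asarray(a) - ba
        w = np.asarray(w) - bg
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp
```

During joint initialization, these pre-integrated deltas are compared with the vision-only structure-from-motion result to solve for the metric scale, gravity direction, and IMU biases.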
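For the evaluation in (4), the reported metric is the root-mean-square localization error. On EuRoC this is commonly computed as the absolute trajectory error (ATE) after aligning the estimated trajectory to ground truth; a similarity (Umeyama) alignment is often used since a monocular estimate is only defined up to scale before IMU fusion. The sketch below assumes the two trajectories are already time-associated, equal-length Nx3 arrays; it is a generic metric, not the thesis' evaluation script.

```python
import numpy as np

def umeyama_align(src, dst):
    """Similarity transform (s, R, t) minimizing ||dst - (s R src + t)||^2 (Umeyama, 1991)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # handle reflection
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

def ate_rmse(est, gt):
    """RMSE of the absolute trajectory error after similarity alignment."""
    s, R, t = umeyama_align(est, gt)
    aligned = (s * (R @ est.T)).T + t
    err = np.linalg.norm(aligned - gt, axis=1)
    return np.sqrt((err ** 2).mean())
```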
Keywords/Search Tags: SLAM, Inertial measurement unit, Sensor information fusion, Bundle Adjustment