
Research On Sensor Fusion Based Monocular Visual SLAM Method

Posted on: 2018-02-25
Degree: Master
Type: Thesis
Country: China
Candidate: J Wang
Full Text: PDF
GTID: 2428330566488168
Subject: Control Science and Engineering
Abstract/Summary:
Visual Simultaneous Localization and Mapping (VSLAM) estimates robot pose and landmark positions from camera images in unknown environments. Such localization for autonomous systems in GPS-denied environments has attracted much attention in recent years. This thesis studies monocular VSLAM methods that fuse data from other sensors to address the drawbacks of monocular VSLAM, such as scale ambiguity, and validates the methods experimentally. The main work includes:

1) A monocular VSLAM method for indoor planar robots is proposed. The method fuses monocular images and wheel odometry in a 6-degree-of-freedom optimization-based SLAM framework. By structuring the Jacobian matrix accordingly and using the Levenberg method, it constrains the estimated robot poses to the ground plane. In addition, the method creates and merges sub-maps to handle tracking failures in complicated environments. Experiments with indoor robots show that the method achieves accurate localization and builds consistent maps in complicated environments.

2) An optimization-based, tightly-coupled visual-inertial navigation algorithm is implemented to fuse monocular images and IMU data, and the code is released as open source. The software is evaluated on a public dataset, and the results show that localization accuracy is within about 10 cm in scenes of roughly 30 m² and 300 m². Building on this algorithm, we propose an inverse depth parameterization for landmarks to improve numerical stability. Outdoor experiments show that the improved method is more numerically stable in large-scale environments, with similar or even higher localization accuracy.

3) A visual-inertial navigation system is designed and built with commercial off-the-shelf sensors. In this system, the monocular camera and IMU data are temporally synchronized to meet the synchronization requirements of the visual-inertial navigation algorithm. Multiple experiments are conducted in indoor and outdoor environments. The localization error is within 10 m upon returning to the start position after a 550 m handheld loop around a building, and the three-axis position errors are within 1 m in a 20×20×20 m scene during the flight of a UAV. These results demonstrate the effectiveness of our visual-inertial navigation system.
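The planar constraint in contribution 1) can be illustrated with a minimal sketch: in an optimization-based back end, one common way to keep a planar robot on the ground is to zero the Jacobian columns, and hence the update components, of the out-of-plane degrees of freedom (z translation, roll, pitch) before applying the Levenberg step. The names and the [x, y, z, roll, pitch, yaw] ordering below are assumptions for illustration, not the thesis's actual implementation.

```python
# Assumed state ordering: [x, y, z, roll, pitch, yaw].
# Masking out z, roll, and pitch keeps the optimized pose on the ground plane,
# mirroring (in simplified form) a Jacobian-column constraint in the back end.
PLANAR_MASK = [1.0, 1.0, 0.0, 0.0, 0.0, 1.0]  # keep x, y, yaw only

def constrain_to_plane(delta):
    """Zero the z/roll/pitch components of a 6-DoF update vector."""
    return [d * m for d, m in zip(delta, PLANAR_MASK)]

def apply_update(pose, delta):
    """Apply a masked additive update; a real system would compose on SE(3)."""
    return [p + d for p, d in zip(pose, constrain_to_plane(delta))]

pose = [1.0, 2.0, 0.0, 0.0, 0.0, 0.5]          # planar robot pose
delta = [0.1, -0.2, 0.05, 0.01, -0.02, 0.03]   # raw optimizer step
new_pose = apply_update(pose, delta)           # z, roll, pitch stay zero
```

In a full solver the same effect is obtained by removing (or zeroing) the corresponding Jacobian columns, so the linear system never produces out-of-plane increments in the first place.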
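The inverse depth parameterization of contribution 2) can be sketched as follows: a landmark is stored as a unit bearing direction plus rho = 1/depth in the frame that first observed it, so distant points map to a small, well-scaled rho instead of a large Cartesian coordinate. The function names below are illustrative and not taken from the thesis's released code.

```python
import math

def to_inverse_depth(point):
    """Convert a 3-D point in the anchor camera frame to (bearing, rho)."""
    depth = math.sqrt(sum(c * c for c in point))
    bearing = tuple(c / depth for c in point)  # unit direction
    return bearing, 1.0 / depth                # rho = inverse depth

def to_point(bearing, rho):
    """Recover the 3-D point from the inverse depth representation."""
    return tuple(b / rho for b in bearing)

# A far-away landmark 50 m along the optical axis:
bearing, rho = to_inverse_depth((0.0, 0.0, 50.0))
# rho = 0.02 -- a small, well-conditioned number even for a distant point,
# which is why this parameterization behaves better numerically outdoors.
```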
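The temporal synchronization in contribution 3) can be illustrated with a minimal sketch: the IMU typically samples much faster than the camera, so a common step is to linearly interpolate IMU measurements at each image timestamp. The sample rates and data layout below are assumptions for illustration, not the thesis's hardware design.

```python
import bisect

def interpolate_imu(imu_times, imu_values, t):
    """Linearly interpolate one scalar IMU channel at camera timestamp t (s)."""
    i = bisect.bisect_left(imu_times, t)
    if i == 0:                       # before the first IMU sample
        return imu_values[0]
    if i == len(imu_times):          # after the last IMU sample
        return imu_values[-1]
    t0, t1 = imu_times[i - 1], imu_times[i]
    v0, v1 = imu_values[i - 1], imu_values[i]
    w = (t - t0) / (t1 - t0)         # interpolation weight in [0, 1]
    return v0 + w * (v1 - v0)

imu_t = [0.000, 0.005, 0.010, 0.015]   # e.g. a 200 Hz IMU
gyro_z = [0.10, 0.20, 0.30, 0.40]      # one gyro axis (rad/s)
cam_t = 0.0075                         # a camera frame timestamp
gyro_at_frame = interpolate_imu(imu_t, gyro_z, cam_t)  # about 0.25
```

In a real system this interpolation only helps if both sensors share a common clock, which is why the hardware-level timestamping mentioned in the abstract matters.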
Keywords/Search Tags:SLAM, sensor fusion, visual inertial navigation system