
Simultaneous Localisation And Mapping Based On 3D Vision For Mobile Robots

Posted on: 2019-01-16
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Q H Yu
Full Text: PDF
GTID: 1368330611493070
Subject: Control Science and Engineering
Abstract/Summary:
Visual Simultaneous Localisation and Mapping (visual SLAM) is the basic method by which mobile robots localise themselves and perceive the environment. With the development of 3D vision cameras (RGB-D cameras), SLAM based on 3D vision has become an active research field. Because 3D vision provides multiple kinds of information, new technologies and methods are needed to organize and integrate this information within the SLAM system. This thesis focuses on three key issues in 3D visual SLAM: visual feature extraction, visual odometry, and loop closure detection. Building on the resulting achievements, a complete 3D visual SLAM system is designed. Throughout the work on these key issues, the emphasis is placed on reasonably integrating the different kinds of information.

Visual feature extraction is the basis of visual SLAM. This thesis proposes a new perspective-invariant feature for RGB-D images, called PIFT (Perspective Invariant Feature Transform). PIFT integrates the color and depth information while making full use of the intrinsic characteristics of each: the two kinds of information are employed in different stages of the feature extraction, and the resulting descriptor is robust to changes of viewpoint. By successfully fusing color and depth, PIFT benefits the accuracy of pose estimation in SLAM while retaining good real-time performance.

Visual odometry is the front-end of visual SLAM and incrementally provides the localisation of the robot. This thesis proposes a new RGB-D visual odometry based on the integration of hybrid information residuals, called HRVO (Hybrid-Residual-based Visual Odometry). HRVO integrates three different kinds of information, namely reprojection, photometric, and depth information, into a joint optimization framework. By fusing these three complementary sources of information, the accuracy and robustness of the visual odometry are improved.

Loop closure detection is the back-end of visual SLAM; it is an effective way to eliminate the accumulated error of visual SLAM, and it guarantees the consistency of the constructed map. This thesis proposes a new probabilistic loop closure detection method based on the integration of pose information and appearance information, called PALoop (Pose-Appearance-based Loop). PALoop fuses the pose information provided by the odometry with the appearance information of the images, taking the different intrinsic characteristics of the two types of information into account. By integrating these two complementary kinds of information, the performance of loop closure detection is improved while good real-time performance is also achieved.

Building on the above research, this thesis combines the proposed methods, adds a back-end for map construction and global optimization, and designs a complete 3D visual SLAM system called HI-3DVSLAM (3D Visual SLAM with Hybrid Information). HI-3DVSLAM achieves real-time and accurate pose estimation for mobile robots in both indoor and outdoor environments, and constructs a dense 3D map that represents the real environment well.
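As a minimal, hedged illustration of the general idea of using color and depth in different stages of feature extraction (this is not the thesis's PIFT algorithm; the function name, the pinhole parameters, and the simple color-histogram descriptor are all invented for the sketch), depth can fix the physical size of the patch being described while color fills the descriptor:

    # Hypothetical sketch: depth-normalized color-patch descriptor.
    # NOT the PIFT algorithm from the thesis; it only illustrates letting
    # depth decide the patch size while color provides the description.
    import numpy as np

    def depth_normalized_descriptor(rgb, depth, u, v, fx=525.0,
                                    patch_metres=0.10, bins=8):
        """Describe keypoint (u, v) with a color histogram over a patch whose
        pixel radius is chosen so it covers roughly patch_metres in the scene."""
        z = depth[v, u]
        if z <= 0:                                   # invalid depth reading
            return None
        # Pinhole model: a length of patch_metres at distance z spans
        # about fx * patch_metres / z pixels.
        r = max(2, int(round(fx * patch_metres / (2.0 * z))))
        patch = rgb[max(0, v - r):v + r + 1, max(0, u - r):u + r + 1]
        # Per-channel color histogram, L1-normalized.
        hist = [np.histogram(patch[..., c], bins=bins, range=(0, 255))[0]
                for c in range(3)]
        desc = np.concatenate(hist).astype(np.float32)
        return desc / max(desc.sum(), 1e-6)

    # Toy usage with synthetic data standing in for an RGB-D frame.
    rgb = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
    depth = np.full((480, 640), 2.0, dtype=np.float32)   # 2 m everywhere
    print(depth_normalized_descriptor(rgb, depth, 320, 240).shape)   # (24,)

Because the patch radius shrinks as the keypoint moves away from the camera, the descriptor always covers roughly the same physical area; this kind of depth-driven normalization is one simple way geometric information can contribute viewpoint robustness to a color descriptor.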
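For the hybrid-residual odometry, the following display sketches the kind of joint cost such a method minimizes over the relative camera pose T; the notation (p_i for back-projected 3D points, u_i for matched keypoints, I_r and I_c for the reference and current intensity images, Z_c for the current depth image, pi for the camera projection, rho for a robust kernel, lambda_p and lambda_d for weights) is assumed for illustration and is not taken from the thesis:

    \[
    E(\mathbf{T}) = \sum_{i}\rho\!\left(\left\|\pi(\mathbf{T}\mathbf{p}_i)-\mathbf{u}_i\right\|^2\right)
    + \lambda_p \sum_{j}\rho\!\left(\left(I_c(\pi(\mathbf{T}\mathbf{p}_j))-I_r(\mathbf{u}_j)\right)^2\right)
    + \lambda_d \sum_{k}\rho\!\left(\left(Z_c(\pi(\mathbf{T}\mathbf{p}_k))-\left[\mathbf{T}\mathbf{p}_k\right]_z\right)^2\right)
    \]

Minimizing such a cost, for example with Gauss-Newton or Levenberg-Marquardt iterations, balances geometric, photometric, and depth evidence in a single optimization, which matches the accuracy and robustness gains described above.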
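For the pose-appearance loop closure detection, a hedged sketch of how the two cues can be fused probabilistically (the symbols below are assumptions and need not match the thesis's exact model) is to score a loop hypothesis l_t = i for the current frame t as a posterior that combines an appearance likelihood with a pose-based prior:

    \[
    p(l_t = i \mid z_t^{\mathrm{app}}, \mathbf{x}_t^{\mathrm{odo}})
    \;\propto\;
    p(z_t^{\mathrm{app}} \mid l_t = i)\, p(l_t = i \mid \mathbf{x}_t^{\mathrm{odo}})
    \]

Here z_t^app denotes the appearance of the current image (for example a bag-of-words description) and x_t^odo the pose estimate provided by the odometry; only candidates that are both visually similar and geometrically plausible receive a high posterior, which keeps the detection accurate without sacrificing real-time performance.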
Keywords/Search Tags: 3D vision, simultaneous localisation and mapping, multi-information fusion, perspective invariant feature transform, hybrid-residual-based visual odometry, pose-appearance-based loop closure detection