
Large-scale Scene Localization And 3D Mapping Of Robots

Posted on: 2021-03-31
Degree: Master
Type: Thesis
Country: China
Candidate: W L Zeng
Full Text: PDF
GTID: 2428330602471887
Subject: Control engineering
Abstract/Summary:
Although robot Simultaneous Localization and Mapping (SLAM) technology has achieved many exceptional research results, its accuracy, running speed, and stability in large-scale (static or dynamic) scenes still need improvement. In addition, sparse point cloud maps are not suitable for 3D visualization of actual scenes or for robot navigation and obstacle avoidance. To address these problems, this paper proposes a SLAM algorithm that combines a monocular camera and an IMU for large-scale robot scenes.

First, a calibration algorithm is used to calibrate the monocular camera, the IMU sensor, and the relative pose between them, and the calibration is verified experimentally. Second, a uniformly distributed feature extraction algorithm performs ORB feature extraction on consecutive image frames and stores the feature points in a quadtree structure. For feature matching, this paper proposes an algorithm that combines a sliding-window model with the GMS algorithm, which eliminates the influence of moving objects in the image and obtains correct ORB feature matching point pairs in the static regions. Then, using the correct feature matches together with the pre-integration information of the IMU sensor, rapid monocular visual initialization is achieved: the IMU pre-integration information is tightly coupled with the monocular initialization result, and the absolute scale of the monocular vision is recovered to complete system initialization. Based on the initialized sparse 3D point cloud, the pose of the current frame is tracked through the reprojection error and the IMU pre-integration information, and keyframe selection is performed. In the local mapping module, the static ORB feature matching point pairs obtained between consecutive keyframes are used to add static 3D map points to the local map. The map points and keyframes in the local map are used to implement loop-closure detection and relocalization during robot motion. Finally, stereo matching is performed on the images of two consecutive keyframes to obtain the corresponding disparity image. Based on the relative pose between keyframes, the disparity image is converted into a 3D point cloud in world coordinates, which is meshed to generate a global 3D dense map for scene visualization.

Experimental verification is performed on the public TUM, KITTI, and EuRoC datasets as well as in actual scenes. The results show that the proposed algorithm obtains correct feature matching point pairs in the static areas of the scene. At the same time, fast and stable system initialization is achieved, with a monocular visual initialization time of 1.7 s. In terms of accuracy and real-time performance, the accuracy on the EuRoC dataset reaches 0.039 m, which is superior to the VINS-Fusion and VI-ORB algorithms, and the accuracy in actual scenes is 0.1% to 0.2%, so the algorithm has high accuracy. In addition, the algorithm can build a 3D dense map on the datasets and in actual scenes to achieve 3D scene visualization.
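The quadtree-based uniform feature distribution described above can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual implementation (which builds on ORB keypoints with response scores); `select_uniform` and the tuple-based keypoint format are assumed names for illustration.

```python
# Hypothetical sketch of quadtree-based uniform keypoint selection.
# Each node covers an image region; the node holding the most keypoints
# is repeatedly split into four quadrants until the number of non-empty
# nodes reaches the target, then the strongest keypoint (highest
# response) in each node is kept, spreading features over the image.

def select_uniform(keypoints, width, height, target):
    """keypoints: list of (x, y, response) tuples."""
    nodes = [((0.0, 0.0, float(width), float(height)), list(keypoints))]
    while len(nodes) < target:
        # Split the node holding the most keypoints.
        nodes.sort(key=lambda n: len(n[1]), reverse=True)
        (x, y, w, h), pts = nodes.pop(0)
        if len(pts) <= 1:              # nothing left to split
            nodes.append(((x, y, w, h), pts))
            break
        hw, hh = w / 2, h / 2
        quads = [[], [], [], []]
        for (px, py, r) in pts:
            q = (1 if px >= x + hw else 0) + (2 if py >= y + hh else 0)
            quads[q].append((px, py, r))
        offsets = [(0, 0), (hw, 0), (0, hh), (hw, hh)]
        for q, (dx, dy) in enumerate(offsets):
            if quads[q]:
                nodes.append(((x + dx, y + dy, hw, hh), quads[q]))
    # Keep only the strongest keypoint per surviving node.
    return [max(pts, key=lambda p: p[2]) for (_, pts) in nodes if pts]
```

Splitting the densest node first is what prevents a textured corner of the image from monopolizing the feature budget.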
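The motion-consistency idea behind the GMS matching step can be sketched as below. This is a simplified, assumed reconstruction of grid-based motion statistics: a match is kept only if enough other matches in its source grid cell agree on the same destination cell (smooth, consistent motion), which is how matches on independently moving objects are rejected. The thesis's sliding-window re-scoring over consecutive frames is omitted, and `gms_filter`, `grid`, and `min_support` are illustrative names and parameters.

```python
# Hypothetical sketch of GMS-style grid motion-consistency filtering.

def gms_filter(matches, width, height, grid=4, min_support=3):
    """matches: list of ((x1, y1), (x2, y2)) matched point pairs."""
    def cell(p):
        return (int(p[0] * grid / width), int(p[1] * grid / height))
    # Group matches by (source cell, destination cell) pair: matches that
    # move coherently vote for the same pair of cells.
    votes = {}
    for p1, p2 in matches:
        votes.setdefault((cell(p1), cell(p2)), []).append((p1, p2))
    kept = []
    for group in votes.values():
        if len(group) >= min_support:   # enough neighbors agree -> static region
            kept.extend(group)
    return kept
```

A match on a moving object lands in a cell pair with little support from its neighbors, so it falls below `min_support` and is discarded.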
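The IMU pre-integration used for initialization and pose tracking can be illustrated in highly simplified form. This sketch accumulates only the velocity and position increments between two frames, under the assumed simplifications that rotation over the interval is negligible and the accelerometer bias is known; the thesis's full formulation also pre-integrates rotation and tracks bias Jacobians.

```python
# Hypothetical, simplified sketch of IMU pre-integration between frames:
# accumulate velocity and position deltas from raw accelerometer samples,
# ignoring rotation and assuming a known constant bias.

def preintegrate(accel_samples, dt, bias=(0.0, 0.0, 0.0)):
    """accel_samples: list of (ax, ay, az) readings at fixed period dt.
    Returns (delta_v, delta_p) expressed in the first frame's coordinates."""
    dv = [0.0, 0.0, 0.0]
    dp = [0.0, 0.0, 0.0]
    for a in accel_samples:
        for i in range(3):
            ai = a[i] - bias[i]
            # Position uses the velocity accumulated so far, then velocity
            # is updated (explicit Euler with a half-step position term).
            dp[i] += dv[i] * dt + 0.5 * ai * dt * dt
            dv[i] += ai * dt
    return dv, dp
```

Because these increments depend only on the IMU samples between the two frames, they can be computed once and reused each time the frame poses are re-estimated, which is what makes tightly coupled optimization with monocular vision fast.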
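The final dense-mapping step, converting a disparity image into a world-frame point cloud, follows standard stereo geometry. The sketch below shows it for a single pixel; `fx`, `fy`, `cx`, `cy` stand for the calibrated intrinsics and `baseline` for the distance between the two keyframe optical centers, with the keyframe pose given as a rotation matrix `R` and translation `t`. The function names are illustrative, not the thesis's code.

```python
# Hypothetical sketch of disparity -> 3D point conversion.

def disparity_to_point(u, v, d, fx, fy, cx, cy, baseline):
    """Back-project pixel (u, v) with disparity d into the camera frame."""
    z = fx * baseline / d          # depth from disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

def camera_to_world(p, R, t):
    """Transform a camera-frame point into world coordinates: p_w = R p + t."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))
```

Applying this to every valid disparity pixel of each keyframe, then transforming by that keyframe's pose, yields the global point cloud that is meshed into the dense map.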
Keywords/Search Tags:Large-scale scene localization and mapping, ORB feature extraction, GMS and sliding window model feature matching, System initialization, 3D dense map