
Research On Simultaneous Localization And Mapping Of Indoor Mobile Robot Based On Depth Vision

Posted on: 2020-11-20
Degree: Master
Type: Thesis
Country: China
Candidate: X S Chen
Full Text: PDF
GTID: 2428330575973464
Subject: Control Science and Engineering

Abstract/Summary:
In indoor environments where GPS signals are unavailable, effectively solving the positioning and navigation problems of mobile robots has become both a difficult and a popular topic in robotics research, and Simultaneous Localization And Mapping (SLAM) provides a suitable solution to this problem. In recent years, depth cameras represented by the Kinect have been able to acquire color information and depth information of a scene at the same time, making visual SLAM based on depth cameras an increasingly important direction of visual SLAM research. This thesis focuses on SLAM technology for indoor mobile robots based on a depth camera. The visual SLAM algorithm is first tested on the TUM dataset, and then experiments are carried out on a mobile robot platform. The results show that the algorithm can build a map of the indoor environment with good performance. The main research contents of this thesis are divided into the following parts:

Firstly, the model of the depth camera used in this work is introduced, including its coordinate systems, coordinate transformations, and calibration method. The system variables involved in the robot's visual SLAM process are then analyzed, and the motion and observation equations are formulated. Combined with a graph model, the real-time localization and map construction process of the mobile robot is represented in terms of poses, laying the foundation for the subsequent research.

Secondly, the modules of the depth-camera-based visual SLAM algorithm are studied in detail. Considering the real-time performance of the overall SLAM algorithm, the fast-to-compute ORB feature is used in the visual SLAM front end, and the Hamming distance is used for matching; a mismatch rejection mechanism is introduced to improve the accuracy of feature matching. When estimating camera motion, the depth information from the depth camera is exploited, and the motion between two image frames is estimated with a 3D-2D method. In the back-end processing part, loop closure detection is introduced: a visual bag-of-words algorithm is used to detect loop closures along the robot's trajectory, which constrains the accumulated pose estimation error. A nonlinear optimization method based on the pose graph is then introduced, and the g2o library is used to solve for the camera trajectory. Using the depth camera data, a point cloud map is constructed and the algorithm is evaluated.

Thirdly, to address the shortcomings of the point cloud map, the Octomap model is introduced, and Octomap construction experiments are carried out on the TUM image sequences. The algorithm is evaluated and analyzed based on the experimental results.

Finally, the visual SLAM algorithm is ported to the mobile robot experimental platform to estimate the robot's trajectory during motion. At the same time, an Octomap of the laboratory environment is constructed, yielding a good environment map and verifying the feasibility of the algorithm.
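
To make the front end described above concrete, the following is a minimal sketch (not the thesis code) of ORB feature extraction, Hamming-distance matching with a simple mismatch filter, and 3D-2D (PnP) motion estimation using depth. The OpenCV calls, the TUM-style intrinsics in K, and the depth scale are assumptions introduced for illustration only.

# Minimal sketch of an ORB + Hamming-distance front end with 3D-2D (PnP)
# motion estimation, in the spirit of the pipeline described above.
# Assumptions (not from the thesis): OpenCV is available, K holds assumed
# TUM-style pinhole intrinsics, and depth values follow the TUM scale.
import cv2
import numpy as np

K = np.array([[525.0, 0.0, 319.5],      # fx, 0, cx  (assumed values)
              [0.0, 525.0, 239.5],      # 0, fy, cy
              [0.0, 0.0, 1.0]])
DEPTH_SCALE = 5000.0                    # TUM convention: depth value / 5000 = meters

def estimate_motion(rgb1, depth1, rgb2):
    """Estimate the camera motion from frame 1 to frame 2."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(rgb1, None)
    kp2, des2 = orb.detectAndCompute(rgb2, None)

    # Hamming-distance matching with a ratio test as a simple mismatch filter.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

    # Back-project matched keypoints of frame 1 to 3D using the depth image,
    # keep their 2D correspondences in frame 2, then solve PnP with RANSAC.
    pts3d, pts2d = [], []
    for m in good:
        u, v = kp1[m.queryIdx].pt
        d = depth1[int(v), int(u)] / DEPTH_SCALE
        if d <= 0:
            continue                    # skip pixels with no valid depth
        x = (u - K[0, 2]) * d / K[0, 0]
        y = (v - K[1, 2]) * d / K[1, 1]
        pts3d.append([x, y, d])
        pts2d.append(kp2[m.trainIdx].pt)

    pts3d = np.array(pts3d, dtype=np.float64)
    pts2d = np.array(pts2d, dtype=np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
    return ok, rvec, tvec               # rotation (Rodrigues vector) and translation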
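
Similarly, the point cloud mapping step rests on the pinhole camera model and the estimated camera pose. The sketch below back-projects one depth image into a world-frame colored point cloud; the function name, the camera-to-world pose T_wc, and the depth scale are hypothetical, chosen only to illustrate the coordinate transformation described above.

# Minimal sketch of pinhole back-projection from an RGB-D frame to a
# world-frame point cloud, as used when accumulating a point cloud map.
# depth_scale and the pose convention are assumptions, not thesis parameters.
import numpy as np

def depth_to_world_cloud(depth, rgb, K, T_wc, depth_scale=5000.0):
    """depth: HxW uint16, rgb: HxWx3, K: 3x3 intrinsics, T_wc: 4x4 camera-to-world pose."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / depth_scale
    valid = z > 0                                   # keep only pixels with measured depth

    # Pinhole model: X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x[valid], y[valid], z[valid], np.ones(valid.sum())])

    # Transform camera-frame points into the world frame with the estimated pose.
    pts_world = (T_wc @ pts_cam)[:3].T              # N x 3 points in the world frame
    colors = rgb[valid]                             # N x 3 colors for the point cloud
    return pts_world, colors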
Keywords/Search Tags: Depth camera, Visual SLAM, ORB feature, Octomap