
Research On Visual Simultaneous Localization And Mapping Of Mobile Robot Based On Depth Sensor

Posted on: 2018-01-02
Degree: Master
Type: Thesis
Country: China
Candidate: Y Y Li
Full Text: PDF
GTID: 2348330533461072
Subject: Mechanical engineering
Abstract/Summary:
Simultaneous localization and mapping (SLAM) is a central topic in mobile robot research: it is the key to autonomous navigation and control of a mobile robot in an unknown environment, and the basis on which the robot carries out its other tasks. In recent years, owing to low sensor cost and the rich information collected, the development and application of visual SLAM has become a research hotspot. As a representative depth sensor that provides both color images and depth information of the scene, the Kinect is widely used in visual SLAM. Current SLAM systems built on Kinect depth sensors consist of a visual-odometry front end and a pose-graph-optimization back end. In this thesis, each part of visual SLAM is analyzed using a Kinect 1.0 as the environment-sensing sensor and a TurtleBot2 robot as the mobile platform. The simultaneous localization and mapping system is implemented under ROS, and several types of indoor environment maps can be built. The main work of this thesis includes the following parts:

The hardware structure and function of the Kinect depth sensor and the TurtleBot2 robot are described in detail. The sensor and mobile-robot drivers are installed and debugged in the ROS development environment; at the same time, the robot is assembled and the experimental hardware platform is built.

Methods of acquiring visual SLAM observations are discussed in detail for different types of visual sensors. Based on the camera imaging model and camera calibration theory, Zhang Zhengyou's planar calibration method is adopted, and the intrinsic and extrinsic parameters are obtained through calibration experiments using the MATLAB calibration toolbox. For the binocular vision system, the principle and the representation of image information are introduced in detail, and images are represented by point features that meet the requirements of visual SLAM. To address the low accuracy, heavy computation, and poor real-time performance of existing image-feature-matching algorithms, an improved SIFT feature-matching algorithm is proposed based on a study of feature extraction and matching, and its feasibility is verified by feature-extraction experiments. For the Kinect sensor, the correspondence between the RGB image and the depth image is introduced in detail, and observations are obtained by traversing pixels under the camera model. Finally, the two observation-acquisition methods are compared in detail.

Each part of the visual SLAM algorithm based on the Kinect depth sensor is then discussed in detail. The ORB feature algorithm, which offers the best real-time performance, is selected for feature extraction, and FLANN-based matching is used to match the features. To eliminate mismatched points, the matches are filtered by a minimum-matching-distance criterion combined with the RANSAC algorithm. In the motion-estimation step, the 3D-2D method is used to estimate the motion of the visual sensor, which effectively reduces the computational complexity of the system and avoids the result falling into a local optimum. In the back end, a frame structure is defined from the image information, and a key-frame selection mechanism is defined based on the motion estimate. From the key frames, the pose graph is constructed using a combination of close loops and random loops; it is then optimized with g2o according to the position and orientation information. Finally, the point cloud map is built from the optimized pose graph.

A point cloud map contains a large amount of data, its overlapping regions are difficult to handle, and it is hard to use directly for robot path planning. To overcome these drawbacks, other map representations are built by converting the point cloud data. By converting the 3D point cloud into two-dimensional fake laser scans, a laser-based mapping algorithm is used to construct a two-dimensional grid map and obtain the region traversable by the robot. An OctoMap is constructed by building an octree structure, which reduces the running time of the system and the amount of map data. Based on the Kinect depth sensor, mapping experiments in the laboratory environment are carried out for the point cloud map, the OctoMap, and the grid map. Different types of environment maps are obtained, verifying the feasibility of the algorithms in this thesis.
Keywords/Search Tags:Kinect depth sensor, Visual SLAM, Feature extraction and matching, Map expression