
Research On SLAM Method Based On Lidar And Vision Fusion

Posted on: 2022-03-22
Degree: Master
Type: Thesis
Country: China
Candidate: C G Gong
Full Text: PDF
GTID: 2518306743963049
Subject: Detection Technology and Automation

Abstract/Summary:
Simultaneous Localization and Mapping (SLAM) has long been a hotspot in robotics research, as well as one of its key technical challenges. When localizing and mapping, a robot must acquire information about itself and its surroundings through sensors such as lidar, cameras, and an IMU. However, a single-line lidar can only collect environmental information at a single height, while SLAM methods based on visual sensors suffer from drawbacks such as feature loss when the robot moves too fast and sensitivity to environmental lighting, resulting in maps with insufficient information. To address these problems, this thesis takes laser and visual sensor data as input and a self-built indoor mobile robot as the platform, and proposes a robot SLAM method based on fused information. The main research content includes the following aspects.

Firstly, the overall design of the robot is carried out according to the requirements: the robot's motion model is analyzed, the 3D mechanical structure is designed according to that model, the communication protocol between the robot base and the host computer is formulated, and programs for robot motion control and data communication are developed.

Secondly, current single-line lidar feature extraction methods are analyzed. To address their sensitivity to noise and poor robustness, a single-line lidar feature extraction method based on a look-ahead window is proposed: the single-line lidar point cloud is first denoised with a bilateral filter, Harris corner extraction is then performed on the filtered point cloud, and finally features are extracted from the point cloud using the look-ahead window method.

Thirdly, the principles of commonly used line-segment feature extraction methods are analyzed. To improve the real-time performance of feature extraction, an improved PPHT (Progressive Probabilistic Hough Transform) feature extraction method is proposed. The improved PPHT extracts horizontal-direction features from the image, and the extracted two-dimensional line-segment features are then converted into three-dimensional space through the depth image. Comparative experiments in a real environment verify the method's real-time performance and its 3D feature-point extraction results.

Finally, a classification-and-fusion framework for laser and visual information is designed. The extracted laser and visual features are classified and matched, and the feature line segments are merged according to the matching results to obtain fused point cloud information. The fused point cloud is then used as the observation for an improved RBPF (Rao-Blackwellized Particle Filter) to build an occupancy grid map, yielding a map with rich information and complete features. Experiments on the whole system verify the feasibility of the method.
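The bilateral filtering step described above can be sketched for a 1D lidar range scan as follows. This is a minimal illustration, not the thesis's implementation; the function name, window size, and the sigma parameters are assumed for the example. The key property is that smoothing is weighted both by beam proximity and by range similarity, so sensor noise is suppressed while depth discontinuities (which the later Harris/corner step depends on) are preserved.

```python
import numpy as np

def bilateral_filter_scan(ranges, window=5, sigma_s=2.0, sigma_r=0.1):
    """Bilateral-filter a 1D lidar range scan: smooth noise while
    preserving depth discontinuities between objects."""
    ranges = np.asarray(ranges, dtype=float)
    half = window // 2
    out = np.empty_like(ranges)
    for i in range(len(ranges)):
        lo, hi = max(0, i - half), min(len(ranges), i + half + 1)
        idx = np.arange(lo, hi)
        # spatial weight: nearby beams contribute more
        w_s = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
        # range weight: beams at a similar depth contribute more,
        # so points across a depth jump are effectively ignored
        w_r = np.exp(-((ranges[idx] - ranges[i]) ** 2) / (2 * sigma_r ** 2))
        w = w_s * w_r
        out[i] = np.sum(w * ranges[idx]) / np.sum(w)
    return out
```

For example, on a scan that steps from 1 m to 2 m, the filter smooths small per-beam noise on each side but leaves the step itself sharp, because the range weight across the jump is near zero.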
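The conversion of extracted 2D line segments into 3D space through the depth image can be sketched with a standard pinhole back-projection. The thesis does not give its formulas, so the function below and the camera intrinsics (fx, fy, cx, cy) are assumptions for illustration only; in practice the segment endpoints would come from a PPHT line detector such as OpenCV's `cv2.HoughLinesP`, and the intrinsics from camera calibration.

```python
import numpy as np

def segment_to_3d(p1, p2, depth, fx, fy, cx, cy):
    """Lift a 2D line segment (pixel endpoints) into 3D camera
    coordinates using a depth image and a pinhole camera model."""
    pts = []
    for (u, v) in (p1, p2):
        z = depth[v, u]          # depth at the pixel, in meters
        x = (u - cx) * z / fx    # pinhole back-projection
        y = (v - cy) * z / fy
        pts.append((x, y, z))
    return pts
```

With a depth of 2 m everywhere and focal lengths of 100 px, a horizontal segment 4 px long maps to a 3D segment 0.08 m long, parallel to the camera's x-axis.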
Keywords/Search Tags:SLAM, Robot, Lidar, Depth camera, Data fusion