
Research On Mapping Algorithm Of Multi-robot Based On The Fusion Of Lidar And Vision

Posted on: 2021-02-09
Degree: Master
Type: Thesis
Country: China
Candidate: X Wan
Full Text: PDF
GTID: 2428330605476683
Subject: Control engineering
Abstract/Summary:
With the advancing informatization and intelligentization of technology, the demand for intelligent mobile robots across industries continues to grow, which in turn creates opportunities for the development of Simultaneous Localization and Mapping (SLAM) technology. Currently, the two most commonly used classes of SLAM algorithms are based on visual sensors or on LiDAR sensors, and fusion-based solutions that compensate for their respective shortcomings are an active research topic. As single-robot SLAM technology matures, research has also extended to multi-robot collaborative SLAM. Following this trend, this thesis studies and experiments with a multi-robot collaborative mapping algorithm based on the fusion of laser and vision.

The thesis first selects and mathematically models the sensors, and then, following the SLAM framework, discusses and selects the front end, back end, and loop detection. For the visual front end of the RGB-D camera, Oriented FAST and Rotated BRIEF (ORB) features are extracted, feature matching is performed, and the camera pose is estimated. For the laser front end, Iterative Closest Point (ICP) is applied to match point pairs.

The thesis focuses mainly on sensor fusion and the multi-robot collaborative system. For the fusion back end, a joint error function combining laser scan data and visual image data is proposed, which optimizes the pose graph jointly. After that, a Bag of Words (BoW) model and keyframe selection rules are established, and the map points are updated according to the keyframes. Finally, a map combining an occupancy grid and a sparse point cloud is generated; a three-dimensional dense point cloud map and an OctoMap are also created for comparison.

For the multi-robot collaborative system, a distributed communication scheme is designed through which member robots transfer keyframes containing scan data, visual data, and pose information. The keyframes are used to update the map points in the map-processing module and to detect similar environments through bag-of-words matching. When a match succeeds, relative pose estimation and map fusion among the member robots are performed from the matched keyframes.

The experimental part of this thesis includes calibration of the RGB-D camera (Kinect v2). An actual-scene mapping experiment is then performed on a single robot to obtain a grid and sparse point cloud map, a three-dimensional dense map, and an OctoMap, which are compared with a single-robot laser-only algorithm. The member robots then map in a distributed manner, and the mapping process and results are analyzed. Finally, the approach is compared with a pure-laser multi-robot algorithm in terms of the global map and the absolute trajectory error, and the experimental results indicate that the fusion of submaps roughly meets the expected goals.
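To illustrate the idea of a joint error function over laser and visual data, the following minimal Python sketch stacks ICP-style point-pair residuals from a 2D scan and residuals from matched visual landmarks into one least-squares problem over a single relative pose between two keyframes. It is not the thesis implementation: the weights `w_laser` and `w_visual`, the 2D ground-plane treatment of the visual term, and all variable names are illustrative assumptions.

```python
# Hedged sketch of a joint laser + visual residual for one edge between two
# keyframes, minimized with SciPy. Weights and data layout are assumptions.
import numpy as np
from scipy.optimize import least_squares

def se2(x):
    """Pose [tx, ty, theta] -> 3x3 homogeneous transform."""
    tx, ty, th = x
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, tx], [s, c, ty], [0, 0, 1]])

def joint_residual(x, laser_src, laser_dst, vis_src, vis_dst,
                   w_laser=1.0, w_visual=0.5):
    """Stack laser point-pair errors (ICP style) and visual landmark errors."""
    T = se2(x)
    # Laser term: transform matched scan points and compare to their pairs.
    src_h = np.hstack([laser_src, np.ones((len(laser_src), 1))])
    laser_err = (src_h @ T.T)[:, :2] - laser_dst
    # Visual term: the same rigid transform applied to matched feature
    # positions, projected to the ground plane for this 2D illustration.
    vis_h = np.hstack([vis_src, np.ones((len(vis_src), 1))])
    vis_err = (vis_h @ T.T)[:, :2] - vis_dst
    return np.hstack([w_laser * laser_err.ravel(),
                      w_visual * vis_err.ravel()])

# Toy data: the same points observed from two nearby poses.
rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(30, 2))
T_true = se2([0.3, -0.1, 0.05])
pts_h = np.hstack([pts, np.ones((30, 1))])
pts_moved = (pts_h @ T_true.T)[:, :2] + rng.normal(0, 0.01, (30, 2))

result = least_squares(joint_residual, x0=[0, 0, 0],
                       args=(pts, pts_moved, pts[:10], pts_moved[:10]))
print("estimated relative pose:", result.x)  # close to [0.3, -0.1, 0.05]
```

In a full pose-graph back end, a residual of this form would be attached to every edge and all keyframe poses would be optimized jointly; the single-edge version above only shows how the two sensor terms can share one cost function.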
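The multi-robot flow described above (keyframes carrying scan data, visual data, and pose; bag-of-words matching; map fusion on success) can be outlined as in the Python sketch below. The keyframe fields, the placeholder vocabulary, and the similarity threshold are assumptions made for illustration, not the thesis's actual data structures.

```python
# Illustrative-only outline of the multi-robot keyframe exchange and
# bag-of-words overlap detection; structures and thresholds are assumed.
from dataclasses import dataclass
import numpy as np

@dataclass
class Keyframe:
    robot_id: str
    pose: np.ndarray         # [x, y, theta] in the robot's own map frame
    scan: np.ndarray         # 2D laser points, shape (N, 2)
    descriptors: np.ndarray  # ORB-like binary descriptors, shape (M, 32)

def bow_vector(descriptors, vocabulary):
    """Quantize descriptors against an (assumed, pre-trained) visual
    vocabulary and return a normalized word histogram."""
    d = np.abs(descriptors[:, None, :].astype(int)
               - vocabulary[None, :, :].astype(int)).sum(-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def detect_overlap(kf_a, kf_b, vocabulary, threshold=0.8):
    """Cosine similarity of BoW vectors; above the threshold the two
    keyframes are treated as the same place and trigger map fusion."""
    score = float(bow_vector(kf_a.descriptors, vocabulary)
                  @ bow_vector(kf_b.descriptors, vocabulary))
    return score >= threshold, score

# Example with a random placeholder vocabulary and descriptors.
rng = np.random.default_rng(1)
vocab = rng.integers(0, 256, size=(64, 32), dtype=np.uint8)
kf1 = Keyframe("robot_a", np.zeros(3), rng.uniform(size=(50, 2)),
               rng.integers(0, 256, size=(120, 32), dtype=np.uint8))
kf2 = Keyframe("robot_b", np.zeros(3), rng.uniform(size=(50, 2)),
               kf1.descriptors.copy())  # same place seen by another robot
print(detect_overlap(kf1, kf2, vocab))
```

Once an overlap is detected, the relative transform between the two robots' map frames would be estimated from the matched keyframes (for example with ICP on the attached scans), and the submaps merged into a global map.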
Keywords/Search Tags: RGB-D Camera, 2D LiDAR, Sensor Fusion, Multi-robot Collaboration, Map Fusion