
Design And Implementation Of SLAM System Based On Lidar And Vision Fusion

Posted on: 2022-10-24
Degree: Master
Type: Thesis
Country: China
Candidate: P Chen
Full Text: PDF
GTID: 2518306557971199
Subject: Electronics and Communications Engineering
Abstract/Summary:
With the rapid development of the mobile service robot industry, its application scenarios have become more complex and diverse. Simultaneous localization and mapping (SLAM) with a single sensor can hardly meet these application requirements, so accurate localization and mapping with multiple sensors has become a research focus in this field. To overcome the respective limitations of two-dimensional lidar and depth cameras in SLAM, this thesis takes the complementary strengths and weaknesses of the two sensors into account and studies and implements a SLAM algorithm based on the fusion of lidar and visual information.

Firstly, to address the problem that the fixed number of sampling particles in the traditional Gmapping algorithm wastes system resources and lowers mapping accuracy, this thesis proposes a variable-particle-number algorithm based on Gmapping. The method dynamically adjusts the number of sampled particles according to the complexity of the environment, and the correspondence between environmental complexity and particle number is determined through extensive experiments. MATLAB simulations show that the motion trajectory estimated by the improved Gmapping algorithm is closer to the ground truth, which improves localization accuracy.

Secondly, based on the open-source ORB-SLAM2 algorithm, a real-time grid map construction method is proposed. The algorithm obtains a map suitable for navigation by combining an inverse sensor model with an occupancy grid map model, so that the mobile robot can construct a clear and accurate grid map in a real meeting-room scene.

Finally, a SLAM scheme combining lidar and a depth camera is proposed. The data fusion adopts Bayesian estimation, and the local maps constructed by the ORB-SLAM2 algorithm and the improved Gmapping algorithm are fused. On the KITTI dataset, the maximum total cumulative motion error is 25.518332 and the mean is 12.312251, and the localization performance is significantly better than that of mainstream single-sensor methods. Comparison experiments between single-sensor SLAM and fusion SLAM in an actual indoor environment show that the fusion scheme produces a more accurate grid map, which verifies the effectiveness and practicability of the fusion scheme and meets the expected design requirements.
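The Bayesian fusion of the two local occupancy grid maps described above can be illustrated with a short sketch. The Python snippet below is a minimal illustration only, not the thesis implementation: it assumes the lidar (improved Gmapping) and vision (ORB-SLAM2 with inverse sensor model) local maps have already been converted to per-cell occupancy probabilities on a common, aligned grid, and it fuses them cell by cell in log-odds form under the usual cell-independence assumption; all function and variable names are hypothetical.

```python
import numpy as np

def prob_to_log_odds(p, eps=1e-6):
    """Convert occupancy probabilities to log-odds, clipping to avoid infinities."""
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

def fuse_occupancy_grids(p_lidar, p_vision, prior=0.5):
    """Fuse two aligned occupancy grids by Bayesian (log-odds) combination.

    p_lidar, p_vision : 2-D arrays of per-cell occupancy probabilities from the
                        lidar and vision local maps (same shape and resolution,
                        already aligned to a common frame).
    prior             : prior occupancy probability of an unobserved cell.
    Returns the fused per-cell occupancy probabilities.
    """
    l_prior = prob_to_log_odds(np.full_like(p_lidar, prior))
    # Independent observations combine additively in log-odds space
    # (subtracting the shared prior once so it is not counted twice).
    l_fused = prob_to_log_odds(p_lidar) + prob_to_log_odds(p_vision) - l_prior
    return 1.0 - 1.0 / (1.0 + np.exp(l_fused))

# Example: a 2x2 patch where both sensors agree on one occupied cell and
# disagree elsewhere; the fused map reinforces agreement and remains
# uncertain (near 0.5) where the sensors conflict.
p_lidar = np.array([[0.9, 0.3], [0.5, 0.2]])
p_vision = np.array([[0.8, 0.7], [0.5, 0.1]])
print(fuse_occupancy_grids(p_lidar, p_vision))
```

The snippet only shows the per-cell Bayesian update rule; in the thesis the fusion is applied to the full local maps produced by the improved Gmapping and ORB-SLAM2 pipelines.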
Keywords/Search Tags: SLAM, Data Fusion, Lidar, Depth camera