
Research On Simultaneous Localization And Map Construction Based On Multi-sensor Fusion

Posted on: 2022-07-05
Degree: Master
Type: Thesis
Country: China
Candidate: K Chen
Full Text: PDF
GTID: 2518306338973379
Subject: Mechanical engineering

Abstract/Summary:
Simultaneous Localization and Mapping (SLAM) is a prerequisite for mobile robots to achieve autonomous navigation and remains a difficult problem in mobile robot navigation research. Different SLAM methods suit different sensors and computing requirements. Laser SLAM is relatively mature, but it is constrained by the lidar's detection range and can therefore lose map data points, while visual SLAM is sensitive to lighting changes and faces several challenges, including large map size, perceptual aliasing, and high computational cost. In recent years, researchers at home and abroad have combined lidar and depth cameras for mapping to improve the accuracy, efficiency, and reliability of the mobile robot's mapping process. This thesis performs multi-sensor fusion based on a wheel encoder, an IMU, a lidar, and an RGB-D camera, fusing both the sensor data and the algorithms, and carries out the corresponding theoretical and experimental research. The main research contents are as follows:

First, the hardware framework of the mobile robot system was constructed, the kinematic model of the mobile robot under Ackermann steering constraints was derived (a minimal kinematic sketch follows this abstract), the lidar and depth camera observation models were introduced in detail, and the ROS environment and the three map representations commonly used by mobile robots were briefly described.

Second, the four sensors were calibrated. An Unscented Kalman Filter (UKF) was used for the joint state estimation of the IMU and the wheel odometry and was compared with the EKF in simulation (a sketch of the unscented prediction step follows this abstract); the results show that the UKF-based fusion performs better and improves the robot's positioning accuracy. The depth camera and the lidar were jointly calibrated through a coordinate-frame transformation, and pose graphs were used to optimize and correct drift, which improves the mapping accuracy.

In the algorithm fusion, the 3D map created by the depth camera was converted into 2D pseudo laser scan data and fused with the lidar data (a conversion sketch follows this abstract). Building on the Cartographer algorithm, the front end used a genetic algorithm combined with NDT for 2D laser scan matching, the back-end pose graph was optimized and analyzed, and a branch-and-bound algorithm was used for loop closure detection, which improves the accuracy of the resulting 2D occupancy grid map.

Finally, mapping was carried out in both simulation and real environments. In a simple constructed environment, the Gmapping and Cartographer methods were compared and their mapping results were analyzed. Cartographer was then compared with the optimized algorithm of this thesis, and tests on the Deutsche Museum dataset verify that the proposed algorithm performs better. In addition, the depth camera was used in simulation to build a dense 3D map and project it down to 2D, successfully constructing a grid map. The fused data was then verified on the physical platform. The experimental results show that the proposed fusion algorithm effectively overcomes the limited information available from a single sensor and is highly robust.

Figures: 55; Tables: 8; References: 78.
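As a supplement to the kinematic modeling mentioned above, the following is a minimal Python sketch of a bicycle-approximation Ackermann motion model. The abstract does not give the actual model or parameters, so the state layout, wheelbase value, and function name are illustrative assumptions, not the thesis's implementation.

```python
import math

def ackermann_predict(x, y, theta, v, delta, wheelbase, dt):
    """Propagate the pose (x, y, theta) of an Ackermann-steered robot.

    v         : forward speed from the wheel encoder [m/s]
    delta     : front steering angle [rad]
    wheelbase : distance between front and rear axles [m] (assumed value)
    dt        : integration step [s]
    """
    # Bicycle approximation of the Ackermann geometry:
    # the yaw rate is v * tan(delta) / L.
    theta_dot = v * math.tan(delta) / wheelbase
    x_new = x + v * math.cos(theta) * dt
    y_new = y + v * math.sin(theta) * dt
    theta_new = theta + theta_dot * dt
    return x_new, y_new, theta_new

# Example: drive at 0.5 m/s with a 10-degree steering angle for one step.
pose = ackermann_predict(0.0, 0.0, 0.0, v=0.5, delta=math.radians(10.0),
                         wheelbase=0.3, dt=0.05)
print(pose)
```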
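The abstract reports that a UKF outperformed an EKF for the joint IMU/odometry state estimation but does not give the state vector or noise models. The sketch below therefore only illustrates the generic unscented-transform prediction step used by any UKF (sigma-point generation, propagation through a nonlinear motion model, and recovery of the predicted mean and covariance); all dimensions, parameters, and the stand-in motion model are assumptions.

```python
import numpy as np

def unscented_predict(mean, cov, f, Q, alpha=1e-3, beta=2.0, kappa=0.0):
    """One UKF prediction step: propagate (mean, cov) through motion model f."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n

    # 2n+1 sigma points spread around the mean by the matrix square root of cov.
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])

    # Standard scaled-UKF weights for reconstructing mean and covariance.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    # Propagate every sigma point through the (nonlinear) motion model.
    sigma_pred = np.array([f(s) for s in sigma])

    mean_pred = wm @ sigma_pred
    diff = sigma_pred - mean_pred
    cov_pred = diff.T @ (wc[:, None] * diff) + Q
    return mean_pred, cov_pred

# Example with a stand-in constant-velocity planar model (x, y, vx, vy).
dt = 0.05
f = lambda s: np.array([s[0] + s[2] * dt, s[1] + s[3] * dt, s[2], s[3]])
m, P = unscented_predict(np.zeros(4), np.eye(4) * 0.01, f, Q=np.eye(4) * 1e-4)
print(m, np.diag(P))
```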
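The data fusion hinges on converting the depth camera output into 2D pseudo laser scan data that can be fused with the real lidar scan (in ROS this role is typically played by a depth-image-to-laser-scan conversion node such as depthimage_to_laserscan). The sketch below shows the basic geometry only: each column of a horizontal band of the depth image is projected through the pinhole model and the nearest valid return per bearing is kept. The camera intrinsics, band width, and function name are illustrative assumptions.

```python
import numpy as np

def depth_to_pseudo_scan(depth, fx, cx, band=10):
    """Convert a depth image (meters) into a pseudo 2D laser scan.

    depth  : (H, W) array of depths along the optical axis; 0 or NaN = invalid
    fx, cx : horizontal focal length and principal point of the pinhole model
    band   : number of rows around the image centre collapsed into one scan
    """
    h, w = depth.shape
    rows = depth[h // 2 - band // 2 : h // 2 + band // 2, :]

    # Bearing of each image column in the camera frame.
    u = np.arange(w)
    angles = np.arctan2(u - cx, fx)            # left-to-right bearings [rad]

    # Range along each bearing: r = z / cos(angle); keep the nearest valid hit
    # in the band, as a planar lidar would.
    with np.errstate(invalid="ignore"):
        ranges = rows / np.cos(angles)          # broadcast over the band rows
    ranges = np.where(rows > 0, ranges, np.inf)
    scan = ranges.min(axis=0)
    scan[~np.isfinite(scan)] = 0.0              # 0 marks "no return"
    return angles, scan

# Example with a synthetic 480x640 depth image and assumed intrinsics.
depth = np.full((480, 640), 2.0)                # flat wall 2 m in front
angles, scan = depth_to_pseudo_scan(depth, fx=525.0, cx=319.5)
print(scan[:5], angles[0], angles[-1])
```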
Keywords/Search Tags: mobile robot, SLAM, data fusion, RGB-D camera, lidar