
Simultaneous Localization And Mapping Based On Lidar And Camera

Posted on: 2021-03-25
Degree: Master
Type: Thesis
Country: China
Candidate: S Wang
Full Text: PDF
GTID: 2370330605475928
Subject: Control Science and Engineering
Abstract/Summary:
In recent years, autonomous driving and intelligent logistics have been developing rapidly. As a fundamental technology, simultaneous localization and mapping (SLAM) is a prerequisite for these intelligent tasks. Light detection and ranging (Lidar) sensors and cameras are widely used in this field to acquire and process information about the surroundings; from these data, SLAM methods estimate the device's motion trajectory and reconstruct a map of its environment. Many Lidar and visual SLAM methods perform well in a variety of application scenarios, yet problems remain, such as accumulated error in visual SLAM and errors arising from the constant angular and linear velocity model in Lidar SLAM, and these still need to be addressed. Therefore, research on combining Lidar and visual data to perform SLAM, so as to reduce or eliminate the drawbacks of each sensor, is of high value. To address these problems, this thesis focuses on the relative pose calibration between the Lidar sensor and the camera, as well as on the fusion of Lidar and visual data for SLAM. The main contributions are as follows:

1. This thesis proposes an algorithm that uses two coplanar circles to calibrate the relative pose between a depth sensor and a camera; it comprises two methods, one based on a corresponding-point constraint and the other on a point-on-plane constraint. The former uses a calibration board containing two coplanar circles. The relative pose between the camera and the calibration board is estimated using projective invariance, from which the circle-center coordinates in the camera coordinate system are obtained; using the scale invariance of the depth sensor, the circle-center coordinates in the depth-sensor coordinate system are computed as well. With the corresponding points collected in each frame, the relative pose between the depth sensor and the camera is refined. The latter method is based on the point-on-plane constraint: the parameters of the calibration-board plane are computed in the camera coordinate system, points on the board are extracted from the depth-sensor data, and the relative pose is estimated under the constraint that the distance from each such point to its corresponding plane is zero. Simulation and real-data experiments show that the proposed method achieves higher accuracy and robustness than existing methods.

2. This thesis studies existing single-line laser SLAM and visual SLAM augmented with depth information provided by the Lidar sensor, implements these algorithms, and analyzes their shortcomings through real-data experiments. A new algorithm is then designed in which the Lidar odometry is used to further refine the Lidar-visual odometry while simultaneously rebuilding the map of the surroundings. Compared with the two algorithms above, the proposed algorithm alleviates the accumulated-error problem of visual SLAM and resolves the local point-cloud distortion caused by the constant angular and linear velocity model in Lidar SLAM, yielding more accurate estimates of the motion trajectory and of the reconstructed map.
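The abstract does not give implementation details for the corresponding-point method, but its core step, aligning the circle centers observed in the depth-sensor frame with the matched centers in the camera frame, can be illustrated with a standard closed-form rigid alignment (Kabsch). The Python/NumPy sketch below is illustrative only and is not taken from the thesis; the function name and data layout are assumptions, and it presumes center correspondences collected over several board poses, since the two coplanar centers from a single pose do not constrain the pose on their own.

```python
import numpy as np

def estimate_extrinsic_from_centers(centers_cam, centers_depth):
    """Estimate rotation R and translation t mapping depth-sensor coordinates
    into the camera frame, given matched circle-center observations
    (N x 3 arrays, one row per circle per board pose). Closed-form Kabsch alignment."""
    centers_cam = np.asarray(centers_cam, dtype=float)
    centers_depth = np.asarray(centers_depth, dtype=float)

    # Center both point sets on their centroids.
    mu_cam = centers_cam.mean(axis=0)
    mu_depth = centers_depth.mean(axis=0)
    A = centers_depth - mu_depth
    B = centers_cam - mu_cam

    # Cross-covariance and SVD give the optimal rotation; D guards against reflections.
    H = A.T @ B
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_cam - R @ mu_depth
    return R, t
```

In a pipeline like the one described above, such a closed-form estimate would typically serve as an initialization that is then refined frame by frame, as the abstract indicates.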
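Likewise, the point-on-plane constraint, that each board point measured by the depth sensor has zero distance to the board plane estimated in the camera frame, can be sketched as a small nonlinear least-squares problem. This is a minimal illustrative sketch (Python with SciPy), not the thesis's implementation; the function name, the axis-angle plus translation parameterization, and the data layout are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def calibrate_point_to_plane(board_points_depth, plane_params_cam, x0=None):
    """Estimate the depth-sensor-to-camera extrinsic (R, t) by enforcing that
    board points measured by the depth sensor lie on the calibration-board
    plane estimated in the camera frame.

    board_points_depth: list of (Ni, 3) arrays, one per board pose.
    plane_params_cam:   list of (n, d) pairs with unit normal n and offset d
                        such that n . X + d = 0 for camera-frame points X."""
    if x0 is None:
        x0 = np.zeros(6)  # axis-angle rotation (3) + translation (3)

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = []
        for pts, (n, d) in zip(board_points_depth, plane_params_cam):
            pts_cam = pts @ R.T + t       # transform depth points into the camera frame
            res.append(pts_cam @ n + d)   # signed point-to-plane distances (ideally zero)
        return np.concatenate(res)

    sol = least_squares(residuals, x0, method="lm")
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```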
Keywords/Search Tags: sensor relative pose calibration, Lidar and camera calibration, simultaneous localization and mapping