
Outdoor Simultaneous Localization and Mapping Based on Laser-Vision Data Fusion

Posted on: 2021-03-29
Degree: Master
Type: Thesis
Country: China
Candidate: X T Guo
Full Text: PDF
GTID: 2428330620476889
Subject: Control Science and Engineering
Abstract/Summary:
Simultaneous Localization and Mapping (SLAM) reconstructs the 3-D environment and estimates the pose of the sensor in real time. It is widely used in intelligent robots, intelligent systems, virtual reality, augmented reality, and related fields. Vision systems and lidar are the two sensors most commonly used in SLAM applications. A visual sensor captures rich scene information at a high frame rate but is vulnerable to lighting changes; lidar provides accurate 3-D range measurements in real time and works day and night, but its data acquisition frequency is lower than that of a visual sensor and it provides no color or texture information. To overcome the limitations of a single sensor, this thesis studies an outdoor SLAM system based on laser-vision data fusion.

Fusing lidar and monocular vision data first requires calibrating the two sensors, that is, solving for the extrinsic parameters between them. For this joint calibration problem, this thesis proposes a method based on a regular hexagonal calibration board: corners are extracted from the laser point cloud and the image to establish matched pairs, and given multiple sets of matched corners, the calibration result is obtained by solving a Perspective-n-Point (PnP) problem. Experimental results show that the proposed calibration method meets practical application requirements.

Building on the existing ORB-SLAM2 framework, this thesis redesigns a laser-vision data fusion SLAM algorithm. To resolve the missing scale of monocular vision, the lidar point cloud is projected onto the pixel plane to obtain depths for the image feature points; this recovers the metric scale of the system and provides more map points in the visual tracking step. In addition, exploiting lidar's 360° scanning, the fusion system can detect loop closures that vision alone cannot, and these detections serve as the basis for loop correction, reducing the accumulated error of the system.

Visual-feature-based methods can provide only a sparse map, whereas the laser-vision fusion SLAM system presented in this thesis provides a dense map of the 3-D environment in real time. During map construction, the pose obtained by the front end is transformed and used as the initial value of the laser odometry, which effectively ensures the accuracy of the 3-D mapping.

To verify the effectiveness of the proposed algorithm, comparative experiments are conducted on the public KITTI dataset, in which our algorithm is compared against ORB-SLAM2, LDSO, and LeGO-LOAM. Compared with these representative systems, the proposed laser-vision data fusion SLAM algorithm achieves better or comparable accuracy.
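The PnP calibration step described above can be illustrated with OpenCV's solvePnP. The following is a minimal sketch, not the thesis's implementation: corner extraction from the point cloud and image is assumed to have already produced matched 3-D/2-D pairs, and the intrinsics K, the extrinsics, and the corner positions are all hypothetical values synthesized so the example runs end to end.

```python
import numpy as np
import cv2

# Hypothetical camera intrinsics (the thesis's calibrated values are not given).
K = np.array([[718.856, 0.0, 607.19],
              [0.0, 718.856, 185.22],
              [0.0, 0.0, 1.0]])

# Synthetic ground-truth extrinsics (lidar frame -> camera frame) for this sketch.
rvec_true = np.array([0.01, -0.02, 0.005])
t_true = np.array([[0.05], [-0.10], [0.27]])

# Simulated 3-D corner positions of the hexagonal board in the lidar frame.
rng = np.random.default_rng(0)
lidar_corners = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(12, 3))

# Project the lidar corners into the image to fake the matched 2-D corners.
image_corners, _ = cv2.projectPoints(lidar_corners, rvec_true, t_true, K, None)

# Solve the PnP problem: recover the extrinsic rotation and translation
# from the 3-D/2-D corner correspondences.
ok, rvec, tvec = cv2.solvePnP(lidar_corners, image_corners, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix, lidar frame -> camera frame
print("R =\n", R)
print("t =", tvec.ravel())
```

With real data, lidar_corners and image_corners would come from the hexagonal-board corner detectors, and the recovered (R, t) is the lidar-to-camera extrinsic used throughout the fusion pipeline.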
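The scale-recovery step, projecting lidar points onto the pixel plane to obtain depths for image feature points, can be sketched as follows. This is a simplified illustration under assumptions of my own: the depth is taken from the nearest projected lidar point within a pixel radius, whereas the thesis's exact association rule is not specified, and the function name and parameters are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_feature_depths(points_lidar, features_uv, K, R, t, radius=3.0):
    """Assign depths to 2-D feature points by projecting lidar points onto
    the pixel plane. points_lidar is (N, 3) in the lidar frame, features_uv
    is (M, 2) pixel coordinates, and (K, R, t) are the intrinsics and the
    lidar->camera extrinsics from the joint calibration."""
    # Transform points into the camera frame; keep those in front of it.
    pts_cam = points_lidar @ R.T + t.reshape(1, 3)
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection onto the pixel plane.
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]

    # For each feature, take the depth of the nearest projected lidar point
    # within `radius` pixels; features with no nearby point keep depth -1.
    tree = cKDTree(uv)
    dist, idx = tree.query(features_uv, distance_upper_bound=radius)
    depths = np.full(len(features_uv), -1.0)
    hit = np.isfinite(dist)
    depths[hit] = pts_cam[idx[hit], 2]
    return depths
```

Features that receive a valid depth can then be back-projected into metric 3-D map points, which is how the projection supplies both the missing monocular scale and the additional map points mentioned above.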
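Finally, using the transformed front-end pose as the initial value of the laser odometry can be sketched with a generic scan-registration call. The thesis does not state which registration method its laser odometry uses; Open3D's point-to-point ICP stands in here purely to show the seeding idea, and the function name and parameters are hypothetical.

```python
import numpy as np
import open3d as o3d

def refine_with_laser_odometry(prev_scan, curr_scan, T_init):
    """Refine the visual front-end pose (already transformed into the lidar
    frame) by registering consecutive scans. prev_scan and curr_scan are
    (N, 3) numpy arrays of lidar points; T_init is the 4x4 initial guess
    supplied by the visual front end."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(curr_scan))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(prev_scan))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_correspondence_distance=1.0, init=T_init,
        estimation_method=o3d.pipelines.registration.
        TransformationEstimationPointToPoint())
    return result.transformation  # refined 4x4 relative pose
```

A good initial guess keeps the scan registration in the basin of convergence, which is the sense in which the front-end pose "effectively ensures the accuracy of the 3-D mapping" in the abstract above.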
Keywords/Search Tags: Simultaneous Localization and Mapping (SLAM), 3-D LiDAR, Monocular Vision, Data Fusion