
The Environment Awareness and Autonomous Positioning System Based on LiDAR and Vision

Posted on: 2020-01-24
Degree: Master
Type: Thesis
Country: China
Candidate: Y Y Ou
Full Text: PDF
GTID: 2392330590474498
Subject: Control Science and Engineering
Abstract/Summary:
With the rapid development of driverless technology, sensor-based Simultaneous Localization and Mapping (SLAM) has attracted increasing attention from researchers. Traditional SLAM relies on a single sensor to perceive the environment, which leads to many shortcomings in complex conditions. This thesis designs an environment awareness and autonomous positioning system based on LiDAR and vision, which fuses the acquired laser point cloud with visual information. In this way, the depth information and feature information of the visual data are enhanced, so that a more accurate environment-aware framework can be built. The framework combines the advantages of LiDAR and visual sensors and performs better in complex conditions, because it reduces the computational load of the algorithm and improves perception accuracy. This thesis focuses on the following four parts:

(1) Automatic single-camera calibration and calibration of multi-camera systems without a common field of view. To calibrate the multi-camera system in this thesis more quickly and accurately, we propose an automatic camera calibration method that photographs multiple checkerboards in a single image and extracts each checkerboard through an energy growth function, so that one picture yields many calibration measurements. For multi-camera systems without a common field of view, we use a special calibration pattern rich in SURF features to create a pseudo-common view, and build a graph optimization model to refine the calibration results.

(2) Joint calibration and optimization of the LiDAR and the multi-camera system. The joint calibration of LiDAR and cameras is carried out using the geometric constraints between the LiDAR and the multi-camera system, by constructing an optimization model. From the multi-camera calibration results and the LiDAR-camera calibration, the extrinsic relationship between the LiDAR and each camera is derived, and the result is optimized with the Levenberg-Marquardt (LM) method by minimizing the reprojection error.

(3) Motion estimation based on LiDAR-camera fusion data. With the fused LiDAR-camera data, we estimate the depth of visual feature points from the laser information, completing the data association. Based on the geometric constraints between adjacent feature points, the motion of the mobile platform is estimated with the PnP algorithm, and local optimization is performed with the Bundle Adjustment method.

(4) Backend optimization and the loop-closing module. For the front-end motion estimation, key-frame and landmark selection strategies based on the fusion data are designed. By interpolating landmark depths, the robustness of the entire backend optimization algorithm is enhanced, and global data optimization is accomplished with a pose graph. Furthermore, a separate thread runs the loop-closing module, which describes each image with a Bag-of-Words model and detects loop closures by measuring the similarity between current and historical data. According to the detected loop key-frames, the global map is consolidated and corrected, eliminating the drift error.
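The depth-association step of part (3) can be illustrated with a minimal NumPy sketch: LiDAR points are projected into the image with the calibrated intrinsics and extrinsics, and each visual feature point takes the depth of the nearest projected laser point. All function names, the toy intrinsic matrix, and the identity extrinsics below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Project LiDAR 3-D points into the image plane.
    points_lidar: (N, 3) points in the LiDAR frame.
    K: (3, 3) camera intrinsics; R (3, 3), t (3,): LiDAR-to-camera extrinsics.
    Returns (N, 2) pixel coordinates and (N,) camera-frame depths."""
    pts_cam = points_lidar @ R.T + t       # transform into the camera frame
    depths = pts_cam[:, 2]
    uv_h = pts_cam @ K.T                   # homogeneous pixel coordinates
    uv = uv_h[:, :2] / uv_h[:, 2:3]        # perspective division
    return uv, depths

def assign_depth_to_features(features_uv, lidar_uv, lidar_depth, max_px=3.0):
    """For each visual feature, take the depth of the nearest projected
    LiDAR point within max_px pixels; leave NaN if none is close enough."""
    out = np.full(len(features_uv), np.nan)
    for i, f in enumerate(features_uv):
        d2 = np.sum((lidar_uv - f) ** 2, axis=1)
        j = int(np.argmin(d2))
        if d2[j] <= max_px ** 2:
            out[i] = lidar_depth[j]
    return out

# Toy calibration: simple pinhole intrinsics, identity extrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
lidar_pts = np.array([[0.0, 0.0, 5.0],    # on the optical axis, 5 m away
                      [1.0, 0.5, 10.0]])
uv, depth = project_lidar_to_image(lidar_pts, K, R, t)
# The on-axis point projects to the principal point (320, 240).
features = np.array([[320.0, 240.0]])
feat_depth = assign_depth_to_features(features, uv, depth)  # → [5.0]
```

Once each feature has a metric depth, the 3-D-to-2-D correspondences between adjacent frames can be passed to a PnP solver and refined by Bundle Adjustment, as the abstract describes.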
Keywords/Search Tags: Camera automatic calibration, LiDAR-camera fusion and optimization, PnP motion estimation, Backend optimization, Loop-closing