
Autonomous Localization Method For Driverless Cars Based On Fusion Of Vision And Lidar

Posted on: 2021-02-21
Degree: Master
Type: Thesis
Country: China
Candidate: L B Meng
Full Text: PDF
GTID: 2370330614950183
Subject: Mechanical and electrical engineering
Abstract/Summary:
With the rapid development of technologies such as computer vision, artificial intelligence, and 5G communication, self-driving technology has been strongly supported and promoted. Simultaneous localization and mapping (SLAM) plays a crucial role in the various application scenarios of self-driving technology and is the basis for driverless cars to realize functions such as environment perception, decision planning, and autonomous motion. However, traditional SLAM methods based on standalone sensors suffer from shortcomings in localization accuracy, mapping quality, and system robustness. This paper proposes a novel autonomous localization method for driverless cars based on tight coupling of monocular vision and lidar point clouds. The method first preprocesses monocular images and lidar point clouds, then tightly fuses the 3D visual and lidar features in a joint optimization framework, and finally outputs a globally consistent pose estimate and a 3D point cloud map of the traversed environment. The main research contents of this paper are as follows:

(1) Data preprocessing for the monocular camera and the multi-line lidar. The input of this module is the time-synchronized images and point clouds with consistent frequency. The module performs sparse optical flow tracking on the current frame's image and extracts new 2D visual features to maintain a constant number of features. The current frame's point cloud is segmented into a labelled point cloud comprising a ground point set and reliable object point sets; unreliable outlier sets are eliminated according to their size. The 3D laser features are then extracted from the labelled point cloud. In addition, the labelled point cloud is associated with the 2D visual features to restore their depth and obtain 3D visual features (the feature-tracking and depth-association steps are sketched after this abstract).

(2) Tightly coupled visual-lidar odometry. This module first removes motion distortion from the current frame's 3D laser features, and the correspondences of all laser and visual features of the current frame are found in the distortion-removed point cloud of the previous frame. Motion constraints between consecutive frames are then constructed, and tightly coupled pose estimation is performed under a joint optimization framework (see the joint-optimization sketch below). The estimation results are further refined in the laser mapping submodule, which outputs the fused pose at 10 Hz and updates the 3D point cloud map at 2 Hz.

(3) Loop detection and global pose graph optimization. The loop closure module detects visual loops using a bag-of-words model and detects vicinity loops based on the odometry's pose estimates. When a loop is detected, a loop constraint is constructed between the current loop keyframe and its historical loop keyframe via ICP registration (see the loop-closure sketch below). The loop constraint is added to a 6-DOF global pose graph for optimization, and the loop-corrected, globally consistent pose estimates and environmental point cloud map are finally output.
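The sketches below illustrate in Python how the main steps described above could be realized; they are minimal reconstructions under stated assumptions, not the thesis's actual implementation. First, the constant-count feature tracking of module (1), assuming OpenCV, grayscale input frames, and an illustrative target of 200 features:

    import cv2
    import numpy as np

    MAX_FEATURES = 200  # target number of tracked features per frame (assumed value)

    def track_and_replenish(prev_gray, curr_gray, prev_pts):
        # Track existing 2D features from the previous frame with sparse LK optical flow.
        if prev_pts is not None and len(prev_pts) > 0:
            curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
                prev_gray, curr_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
            curr_pts = curr_pts[status.ravel() == 1]  # keep only successfully tracked points
        else:
            curr_pts = np.empty((0, 1, 2), dtype=np.float32)

        # Extract new corners away from surviving tracks so the count stays constant.
        deficit = MAX_FEATURES - len(curr_pts)
        if deficit > 0:
            mask = np.full(curr_gray.shape, 255, dtype=np.uint8)
            for u, v in curr_pts.reshape(-1, 2):
                cv2.circle(mask, (int(u), int(v)), 15, 0, -1)  # suppress crowded regions
            new_pts = cv2.goodFeaturesToTrack(curr_gray, maxCorners=deficit,
                                              qualityLevel=0.01, minDistance=15, mask=mask)
            if new_pts is not None:
                curr_pts = np.vstack([curr_pts, new_pts.astype(np.float32)])
        return curr_pts  # shape (N, 1, 2), ready for the next tracking call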
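Module (1)'s depth recovery associates the labelled point cloud with the 2D visual features. A hedged sketch, assuming known camera intrinsics K and lidar-to-camera extrinsics (R, t), in which each feature takes the depth of its nearest projected lidar point:

    import numpy as np

    def associate_depth(features_2d, lidar_xyz, K, R, t, max_px_dist=3.0):
        # Transform lidar points into the camera frame and keep those in front of it.
        pts_cam = lidar_xyz @ R.T + t
        pts_cam = pts_cam[pts_cam[:, 2] > 0.1]
        # Pinhole projection: divide by depth to get pixel coordinates.
        proj = pts_cam @ K.T
        uv = proj[:, :2] / proj[:, 2:3]
        feats_3d = []
        for f in features_2d:
            d2 = np.sum((uv - f) ** 2, axis=1)
            j = np.argmin(d2)
            if d2[j] < max_px_dist ** 2:      # accept only nearby associations
                z = pts_cam[j, 2]             # borrow the lidar point's depth
                x = (f[0] - K[0, 2]) * z / K[0, 0]
                y = (f[1] - K[1, 2]) * z / K[1, 1]
                feats_3d.append((x, y, z))    # back-projected 3D visual feature
        return np.array(feats_3d)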
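Module (2)'s tightly coupled pose estimation stacks visual and lidar constraints into one least-squares problem. A minimal sketch over a single 6-DOF pose (rotation vector plus translation), combining visual reprojection residuals with lidar point-to-plane residuals; the data layout and unit weights are assumptions:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def joint_residuals(x, vis_3d, vis_uv, K, laser_pts, plane_pts, plane_nrm,
                        w_vis=1.0, w_lsr=1.0):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        # Visual term: reproject previous-frame 3D features into the current image.
        p_cam = vis_3d @ R.T + t
        proj = p_cam @ K.T
        uv = proj[:, :2] / proj[:, 2:3]
        r_vis = (uv - vis_uv).ravel() * w_vis
        # Lidar term: signed distance of transformed laser feature points to their
        # matched planes (plane_pts is a point on each plane, plane_nrm its normal).
        q = laser_pts @ R.T + t
        r_lsr = np.einsum('ij,ij->i', q - plane_pts, plane_nrm) * w_lsr
        return np.concatenate([r_vis, r_lsr])

    # pose = least_squares(joint_residuals, x0=np.zeros(6),
    #                      args=(vis_3d, vis_uv, K, laser_pts, plane_pts, plane_nrm)).x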
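Module (3)'s loop constraint can be sketched with Open3D's ICP registration and pose graph utilities as stand-ins for the thesis's implementation; the 1.0 m correspondence distance and identity information matrix are placeholder values:

    import numpy as np
    import open3d as o3d

    def add_loop_constraint(pose_graph, cloud_cur, cloud_hist, idx_cur, idx_hist, init_guess):
        # Register the current loop keyframe against its historical match with ICP.
        result = o3d.pipelines.registration.registration_icp(
            cloud_cur, cloud_hist, max_correspondence_distance=1.0,
            init=init_guess,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        # Add the relative transform as a 6-DOF loop edge; loop edges are marked uncertain.
        edge = o3d.pipelines.registration.PoseGraphEdge(
            idx_cur, idx_hist, result.transformation, np.identity(6), uncertain=True)
        pose_graph.edges.append(edge)

    # Once all loop edges are added, the global pose graph is optimized, e.g.:
    # o3d.pipelines.registration.global_optimization(
    #     pose_graph,
    #     o3d.pipelines.registration.GlobalOptimizationLevenbergMarquardt(),
    #     o3d.pipelines.registration.GlobalOptimizationConvergenceCriteria(),
    #     o3d.pipelines.registration.GlobalOptimizationOption(max_correspondence_distance=1.0))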
Keywords/Search Tags: Autonomous localization, Driverless cars, Tight coupling, Visual-lidar odometry, Loop closure