
Research on Monocular Vision Localization Technology Based on Lidar Point Cloud Map

Posted on: 2022-03-13    Degree: Master    Type: Thesis
Country: China    Candidate: B Qiu    Full Text: PDF
GTID: 2518306539461774    Subject: Control Engineering
Abstract/Summary:
In recent years, industries such as autonomous driving, autonomous drones, and intelligent robots have developed rapidly, and mature applications have appeared in large numbers in the service industry, agriculture, and medical care. Positioning is one of the most basic requirements for a robot to complete such tasks, and it is mainly achieved with lidar-based or camera-based methods. A single sensor can no longer meet the needs of practical application scenarios, while combining the respective advantages of lidar and camera makes it possible to achieve positioning with higher accuracy and stronger stability. Building on the Direct Sparse Odometry (DSO) algorithm, which follows the sparse direct method, this thesis proposes a monocular visual positioning method based on a prior lidar point cloud map. The method uses the distance and plane information of the lidar map to achieve monocular visual positioning with absolute scale. The algorithm was tested on the public EuRoC dataset, and the results show that it runs stably even in scenes with obvious illumination changes and that its positioning accuracy is better than that of current state-of-the-art monocular visual positioning algorithms.

The main research contents are as follows:

(1) To address the cumbersome process of extrinsic calibration between lidar and camera, an automatic extrinsic calibration algorithm based on edge features is proposed. First, the image data are processed: edges are extracted and an inverse distance transform is applied to obtain an edge image with a smooth gradient. Second, the lidar data are processed: points whose distance to adjacent points is below a threshold are filtered out, leaving an edge point cloud. The edge point cloud is then projected onto the preprocessed image to construct an objective function, which is optimized to obtain the optimal extrinsic parameters (see the first sketch below). Finally, tests on a set of real data verify that the method can obtain high-precision extrinsic parameters in arbitrary scenes.

(2) To address the scale drift and low positioning accuracy of monocular visual positioning algorithms, a monocular visual positioning algorithm based on a lidar point cloud map is proposed. First, the normals of the scene's 3D point cloud map are estimated using PCA (Principal Component Analysis). Then, the feature points of the image are matched with points in the resulting normal map to obtain the distance and plane information of the corresponding pixels (see the second sketch below). Finally, the map distances are used to initialize the monocular camera, a plane constraint is added to the photometric error function, and a camera pose with absolute scale and higher accuracy is obtained through nonlinear optimization.

(3) To address the adverse effect of illumination changes on pose estimation in SLAM algorithms based on the direct method, photometric calibration of the camera is added to the visual positioning algorithm, so that it can run stably in scenes with obvious illumination changes, improving the stability of the monocular visual positioning algorithm.
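The edge-alignment objective in (1) can be illustrated with a minimal Python sketch. This is not the thesis implementation; the function names, the Canny thresholds, and the exponential weighting of the distance transform are assumptions, used only to show how projected lidar edge points can be scored against an inverse-distance-transformed edge image, with the score maximized over the extrinsic parameters.

```python
# Hedged sketch: evaluate how well a candidate lidar-camera extrinsic T aligns
# lidar edge points with image edges. All names and constants are illustrative.
import cv2
import numpy as np

def edge_score_image(gray, gamma=0.98):
    """Canny edges followed by an inverse distance transform:
    pixels close to an image edge receive values close to 1."""
    edges = cv2.Canny(gray, 50, 150)
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)  # pixels to nearest edge
    return gamma ** dist  # decays smoothly away from edges

def calibration_objective(T, K, lidar_edge_pts, score_img):
    """Sum of edge-image scores at the projections of the lidar edge points.
    T: 4x4 extrinsic (lidar -> camera), K: 3x3 intrinsics,
    lidar_edge_pts: (N, 3) edge points extracted from the point cloud."""
    pts_h = np.hstack([lidar_edge_pts, np.ones((len(lidar_edge_pts), 1))])
    pts_cam = (T @ pts_h.T)[:3]                    # points in the camera frame
    pts_cam = pts_cam[:, pts_cam[2] > 0.1]         # keep points in front of the camera
    uv = K @ pts_cam
    uv = (uv[:2] / uv[2]).T.astype(int)            # pixel coordinates
    h, w = score_img.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return score_img[uv[ok, 1], uv[ok, 0]].sum()   # higher score = better alignment
```

The extrinsic T that maximizes `calibration_objective` (for example via a grid or gradient-free search) would be taken as the calibration result.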
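The plane information in (2) depends on estimating a local surface normal for each map point. The following minimal sketch assumes PCA is applied to the k nearest neighbors of a query point in the lidar map; it illustrates how a normal and a point-to-plane distance of the kind used in the plane constraint could be computed, and is not the thesis code.

```python
# Hedged sketch: local plane fit via PCA and signed point-to-plane distance.
import numpy as np

def fit_local_plane(neighbors):
    """neighbors: (k, 3) lidar map points around a query point.
    Returns (unit normal n, offset d) of the least-squares plane n.x + d = 0."""
    centroid = neighbors.mean(axis=0)
    cov = np.cov((neighbors - centroid).T)    # 3x3 covariance of the local patch
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    normal = eigvecs[:, 0]                    # direction of least variance
    return normal, -normal.dot(centroid)

def point_to_plane_distance(p, normal, d):
    """Signed distance of a 3D point p to the plane n.x + d = 0, with |n| = 1."""
    return normal.dot(p) + d

# Toy example: a noisy patch of the z = 0 plane.
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(-1, 1, (50, 2)), 0.01 * rng.standard_normal(50)]
n, d = fit_local_plane(patch)
print(n)                                                         # close to (0, 0, 1) up to sign
print(point_to_plane_distance(np.array([0.0, 0.0, 0.5]), n, d))  # close to 0.5 up to sign
```

In the thesis, such a distance term would enter the photometric error as an additional plane constraint; the exact form of that term is not reproduced here.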
Keywords/Search Tags: Multi-sensor fusion, Extrinsic calibration, Monocular vision localization, Lidar point cloud map