
Research On Low-Beam LiDAR/INS/GNSS Integrated Navigation Algorithm For Vehicles In Urban Road Environment

Posted on: 2024-04-01  Degree: Doctor  Type: Dissertation
Country: China  Candidate: T Y Liu  Full Text: PDF
GTID: 1522307292459754  Subject: Communication and Information System
Abstract/Summary:
Autonomous driving and intelligent transportation require accurate, reliable, and continuous vehicle positioning in complex dynamic environments. However, the insufficient robustness and high cost of navigation systems remain the main obstacles to autonomous driving applications. The performance of existing LiDAR-based navigation solutions suffers from the interference of dynamic objects on urban roads, and research on feature description for low-beam LiDAR is insufficient. In this study, a high-precision, robust integrated positioning scheme combining low-beam LiDAR, MEMS INS, and GNSS is proposed, built on typical landmarks in the urban road environment and LiDAR bird's-eye-view (BEV) feature points. The main research and contributions of this study are as follows:

1. Because the low vertical sampling rate of low-beam LiDAR makes it difficult to eliminate dynamic targets through semantic information extraction, a positioning strategy based on stationary roadside signs is adopted. A real-time extraction algorithm for pole-like objects is proposed, and a corresponding LiDAR-inertial odometry (LIO) is designed and implemented. Traditional low-beam LiDAR positioning algorithms use curvature-based feature points and probability maps for pose estimation; owing to the insufficient spatial sampling rate of the LiDAR, they have difficulty accurately detecting and eliminating dynamic objects in the environment, a problem that is particularly serious in urban scenes. Considering the characteristics of a typical urban road environment, pole-like objects and traffic signs are selected as reliable landmarks for LiDAR positioning. To this end, this study proposes a real-time pole-like object extraction algorithm suitable for low-beam LiDAR point clouds. Clustering is first applied to obtain regions of interest, and a feature vector is computed for each candidate cluster. An artificial neural network is then trained to classify each candidate as pole or non-pole. Compared with a threshold-based method, the accuracy of the proposed method is 91.3% (improved by 25%), and the false positive rate is only 10.3% (decreased by 30%). The extraction process runs at 10 times the speed of the real-time data flow. Field tests were carried out in different seasons and regions. Compared with classical algorithms based on the feature-point method (LIO-SAM) and a probability map (FPG-LIO), the results show that the divergence rate of the plane position error of the proposed method is 0.16% in a typical industrial-park environment and 0.40% in a complex urban road environment. The proposed method also has significant robustness advantages in scenes with many dynamic objects: taking FPG-LIO as a reference, the plane positioning error is reduced by 70.4% and the heading error by 84.3% in the urban environment.

2. To address the weaknesses in feature description and data association of curvature-based feature points extracted from low-beam LiDAR point clouds, this study proposes a feature-point extraction and description method that combines a deep learning network with BEV images of the point cloud. The classical feature-point method extracts the points with the largest or smallest curvature within a local neighborhood of a single LiDAR scan line, without considering vertically adjacent scan lines. A LiDAR BEV image projects the information of different vertical scan lines onto the horizontal plane, so that their features can be extracted by square convolution kernels. The proposed method first takes the height coordinate of the highest point in each grid cell as the grayscale of the corresponding pixel. It then feeds the BEV into a deep learning network as a 2D image to obtain the points with the strongest response and output their descriptors. Feature-point matching experiments based on LiDAR odometry show that the proposed method achieves higher accuracy than classical LiDAR feature points and ORB features. In a park-environment test with a 16-line LiDAR, without loop detection and correction, the maximum position error, root mean square error, and standard deviation of the proposed algorithm are 48.7%, 41.8%, and 51.9% better than those of the open-source algorithm MULLS. Even compared with MULLS with loop detection and correction, the maximum position error and positioning standard deviation of the proposed algorithm are still reduced by 27.3% and 19.7%, respectively.

3. A typical pre-built map for localization requires large data storage, which seriously affects map storage, transmission, and loading. The feature points described above are therefore employed for lightweight feature-map construction and matching. In the mapping process, only high-quality feature points observed multiple times are retained. Spatial downsampling of feature points is performed by limiting the minimum distance between adjacent LiDAR keyframes, and the storage volume is further reduced by saving each descriptor in half precision. In map-based localization tests, the proposed method reaches the same level of localization accuracy as Range-mcl, a classical open-source solution with excellent performance, while reducing map storage by 87.5%. For map-based localization, an observation-angle check is proposed to enhance matching robustness: the observation angles of the feature points are saved during mapping, compared with the current observation angle during real-time navigation, and matching pairs with large angle differences are eliminated. Experimental results show that this check effectively reduces mismatches: the vehicle's longitudinal error is reduced by up to 64.1%, and the root mean square error of the plane position by up to 56.1%.

4. Combining the robust landmark-based front end above with the proposed global positioning back end based on the lightweight feature-point map, a low-beam LiDAR/MEMS INS/GNSS integrated navigation system is designed and implemented. During initialization, high-accuracy GNSS positioning observations are used for velocity alignment to initialize the system in the global coordinate frame. The front end performs matching and local pose estimation using the landmark features extracted from the LiDAR point cloud, and the resulting poses are added to the graph optimization problem as measurements. The back end fuses GNSS positioning information with the matching results of the pre-built feature-point map to achieve global high-precision positioning. When high-quality GNSS observations are available, the system achieves centimeter-level positioning accuracy. When the GNSS signal is denied but the pre-built map is available, the root mean square errors of the transverse and longitudinal positions are 0.08 m and 0.10 m, respectively. When both the pre-built map and the GNSS signal are unavailable, the system degrades to the LiDAR/INS odometry mode and maintains a low positioning error (a plane position error rate of 0.16%-0.40%, as noted above).

In conclusion, this study addresses the requirements of robust, high-precision positioning for autonomous driving in urban areas. A robust front end is obtained through pole extraction from low-beam LiDAR point clouds, and global high-precision localization is achieved through the feature-point map in the back end. Combining the two modules completes the low-beam LiDAR/MEMS INS/GNSS navigation system. Tests and analyses in several typical scenarios verified the feasibility of the proposed algorithms. This research provides a low-cost, high-precision navigation solution for autonomous vehicles in urban road environments.
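The pole-like object extraction in contribution 1 clusters the point cloud into candidate regions and computes a feature vector per cluster before neural-network classification. The dissertation does not list the exact features, so the sketch below is a minimal, hypothetical feature vector (height extent, horizontal footprint, PCA linearity, and verticality) in Python with NumPy; the function name and feature choices are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def pole_features(cluster):
    """Hypothetical geometric feature vector for one candidate cluster.

    `cluster` is an (N, 3) array of x, y, z points. A true pole should show
    a large vertical extent, a small horizontal footprint, high linearity,
    and a principal axis aligned with the z axis.
    """
    z_extent = cluster[:, 2].max() - cluster[:, 2].min()
    xy_extent = np.linalg.norm(cluster[:, :2].max(axis=0) -
                               cluster[:, :2].min(axis=0))

    # PCA of the cluster: eigh returns eigenvalues in ascending order.
    cov = np.cov(cluster.T)
    evals, evecs = np.linalg.eigh(cov)
    linearity = (evals[2] - evals[1]) / (evals[2] + 1e-9)

    # Verticality: |z component| of the principal axis (1.0 = vertical).
    verticality = abs(evecs[2, 2])

    return np.array([z_extent, xy_extent, linearity, verticality])
```

In a pipeline like the one described above, these vectors would be fed to a small classifier that outputs pole/non-pole labels.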
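Contribution 2 builds a BEV grayscale image by taking the highest point in each grid cell as the pixel value before feeding the image to the deep learning network. A minimal sketch of that projection step, assuming an 80 m x 80 m window and 0.2 m cells (both hypothetical parameters not specified in this abstract):

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(-40.0, 40.0), y_range=(-40.0, 40.0),
                       cell_size=0.2):
    """Project a LiDAR point cloud to a BEV grayscale image.

    Each pixel stores the height (z) of the highest point falling into the
    corresponding grid cell, normalized to [0, 255]. `points` is an (N, 3)
    array of x, y, z coordinates.
    """
    w = int((x_range[1] - x_range[0]) / cell_size)
    h = int((y_range[1] - y_range[0]) / cell_size)

    # Keep only points inside the BEV window.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Grid cell indices for each remaining point.
    ix = ((pts[:, 0] - x_range[0]) / cell_size).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / cell_size).astype(int)

    # Highest z per cell via an unbuffered maximum scatter.
    bev = np.full((h, w), -np.inf)
    np.maximum.at(bev, (iy, ix), pts[:, 2])
    bev[np.isinf(bev)] = 0.0  # empty cells -> ground level

    # Normalize to an 8-bit grayscale image for the CNN front end.
    z_min, z_max = bev.min(), bev.max()
    if z_max > z_min:
        bev = (bev - z_min) / (z_max - z_min)
    return (bev * 255).astype(np.uint8)
```

The resulting 2D image lets square convolution kernels mix information across vertical scan lines, which single-scan-line curvature features cannot do.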
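The observation-angle check of contribution 3 stores the angle from which each map feature was observed and, during real-time navigation, rejects matches whose current viewing angle differs too much from the stored one. A sketch of that check, where the function names and the 60-degree threshold are illustrative assumptions:

```python
import numpy as np

def observation_angle(feature_xy, sensor_xy):
    """Bearing from the sensor position to a feature point, in radians."""
    d = np.asarray(feature_xy, dtype=float) - np.asarray(sensor_xy, dtype=float)
    return np.arctan2(d[1], d[0])

def passes_angle_check(stored_angle, current_angle,
                       max_diff_rad=np.deg2rad(60.0)):
    """Accept a map match only if the viewing directions are consistent.

    The difference is wrapped to (-pi, pi] so that angles near +/-pi
    compare correctly across the branch cut.
    """
    diff = np.abs(np.angle(np.exp(1j * (current_angle - stored_angle))))
    return diff <= max_diff_rad
```

Descriptors in the lightweight map can likewise be stored in half precision (e.g. `descriptor.astype(np.float16)`) to halve their footprint, as described above.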
Keywords/Search Tags: Integrated Navigation, Low-beam LiDAR, MEMS IMU, Autonomous Driving, Feature Extraction, Graph Optimization