
Research On 3D Lane Detection Method Based On Fusion Of Lidar And Camera

Posted on: 2023-11-21
Degree: Master
Type: Thesis
Country: China
Candidate: H L Ye
Full Text: PDF
GTID: 2532307097476694
Subject: Mechanical engineering

Abstract/Summary:
With the development of sensor technology and the autonomous driving industry, environment perception based on multi-source sensor fusion has advanced rapidly. As an important part of the autonomous driving perception system, lane detection plays a key role in vehicle safety: on structured roads in particular, the drivable area and the relationship between the vehicle and the lane lines can only be determined once the lane lines are correctly detected, providing a sound basis for subsequent decision-making and planning and ensuring normal driving. However, most current lane detection methods rely on a single sensor and are therefore strongly affected by that sensor's own limitations, while fusion-based detection methods remain comparatively under-studied and suffer from low accuracy and high computational cost. Moreover, existing lane detection methods mainly produce 2D detection results and lack three-dimensional information such as road slope. 3D lane lines, by contrast, not only carry accurate three-dimensional lane coordinates but also describe complex road properties such as slope and bumpiness; they can supply richer road information to mapping and localization modules and promote the development of advanced autonomous driving. This thesis therefore takes the 3D lane detection problem as its research object and proposes a 3D lane detection method based on the fusion of lidar and camera. The main work is as follows:

First, reliable sensor calibration parameters are a prerequisite for accurate multi-sensor fusion. To obtain accurate joint calibration parameters, a lidar-camera joint calibration framework and model are constructed. Through analysis of multi-sensor joint calibration theory, the calibration functions to be realized and the parameters to be acquired are determined; multi-sensor time synchronization is achieved by combining hardware and software synchronization; and, based on the installation positions of the lidar and camera and the feature correspondences between the two data sources, initial extrinsic calibration and subsequent accurate refinement of the multi-sensor spatial parameters are realized. Finally, experimental analysis verifies that the joint calibration framework and model constructed in this thesis yield accurate calibration parameters and thus ensure accurate fusion of lidar and camera data.

Secondly, based on the joint calibration results, an image-based lane segmentation method and a candidate point cloud extraction method are constructed. The former casts lane detection as a semantic segmentation problem and builds a lane segmentation network based on BiSeNet-v2; to compensate for the lack of lane labels in existing semantic segmentation datasets, an existing lane detection dataset is converted into a segmentation dataset via pixel interpolation and used to train the network. The latter applies a projection transformation to the lidar point cloud using the joint calibration results and obtains the candidate lane point cloud by correlating the projection map with the lane segmentation result map.

Then, combining the image-based candidate point cloud information, a multi-feature gradient-based point cloud extraction method and a 3D lane fusion detection method are constructed. To address the low accuracy and frequent false detections of existing lidar-based lane detection methods, a lane point cloud extraction method based on the multi-gradient features of point height and reflection intensity is constructed, extracting a secondary candidate point cloud from the original scan. To handle the inconsistent distribution of the point cloud and its many noise points, point cloud interpolation and filtering methods are constructed: the fused candidate point cloud is filtered, interpolated, and clustered to extract the 3D point cloud cluster corresponding to each lane line. To address the poor segmentation and fitting of 3D lane lines, a multi-segment 3D lane fitting method is constructed, in which differential evolution and Bayesian optimization realize adaptive segmentation of the lane point cloud; the segmentation results are then used for lane fitting, yielding accurate 3D lane geometry.

Finally, the feasibility and accuracy of the proposed detection method are verified on the KITTI autonomous driving dataset. Several sets of experiments on data from different scenarios show that the proposed 3D lane detection method based on lidar-camera fusion can accurately and effectively identify and extract 3D lane line information, and that the overall lane detection framework and system have high application value.
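The candidate point cloud extraction step described above — projecting lidar points into the image and keeping those that fall on lane pixels — can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the function name, the pinhole model, and the assumption of a single `4x4` lidar-to-camera extrinsic are all mine.

```python
import numpy as np

def extract_lane_candidates(points, K, T_cam_lidar, lane_mask):
    """Project lidar points into the image and keep those that land
    on lane pixels in the segmentation mask.

    points      : (N, 3) lidar points in the lidar frame
    K           : (3, 3) camera intrinsic matrix
    T_cam_lidar : (4, 4) extrinsic transform, lidar frame -> camera frame
    lane_mask   : (H, W) boolean lane-segmentation mask
    """
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[front]
    kept = points[front]

    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)

    # Discard points that project outside the image bounds.
    h, w = lane_mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Candidate lane points are those whose projection hits the mask.
    hit = np.zeros(inside.shape, dtype=bool)
    hit[inside] = lane_mask[v[inside], u[inside]]
    return kept[hit]
```

Only points surviving both the frustum check and the mask lookup enter the candidate cloud, which is why accurate extrinsics from the joint calibration stage are essential: a small extrinsic error shifts every projection and corrupts the mask lookup.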
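The multi-gradient extraction idea — lane paint lies on a near-flat road surface but is markedly more reflective than asphalt — can likewise be sketched for a single lidar scan ring. The thresholds `dz_max` and `di_min` are illustrative assumptions, not values from the thesis, and the thesis combines more features than this sketch does.

```python
import numpy as np

def lane_points_by_gradients(ring, dz_max=0.02, di_min=0.2):
    """Select candidate lane-paint points on one lidar scan ring.

    ring   : (N, 4) array of [x, y, z, intensity], ordered along the ring
    dz_max : assumed ceiling on the height gradient (paint is on flat road)
    di_min : assumed floor on the intensity gradient (paint is reflective)
    """
    z = ring[:, 2]
    i = ring[:, 3]
    # Central-difference gradients along the ring ordering.
    dz = np.abs(np.gradient(z))
    di = np.abs(np.gradient(i))
    # Keep points that are near-flat in height but show a sharp
    # reflection-intensity change, typical of painted markings.
    mask = (dz < dz_max) & (di > di_min)
    return ring[mask]
```

Combining the height and intensity criteria is what makes the extraction "multi-gradient": either feature alone produces false positives (curbs trigger height gradients, wet patches trigger intensity gradients), while their conjunction is far more selective.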
Keywords/Search Tags: Autonomous Driving, Lane Detection, Lidar, Computer Vision, 3D Lane