
Research on a Road Extraction Method Based on Point Cloud and Image Fusion

Posted on: 2022-01-26
Degree: Master
Type: Thesis
Country: China
Candidate: Y L Du
Full Text: PDF
GTID: 2518306500451504
Subject: Photogrammetry and Remote Sensing

Abstract/Summary:
Environmental perception is one of the key enabling technologies of autonomous driving, and with the rapid development and deployment of automated driving it has become a research hotspot in recent years. On the one hand, perceiving the driving environment provides decision-making information for vehicle navigation, control, and route planning based on the surrounding traffic conditions; on the other hand, it supports high-precision localization by matching the perceived environment against high-precision maps. The road is the drivable area of the vehicle, so accurate and robust road extraction is the basis of route planning and navigation, and the extracted road region also provides an important reference for tasks such as lane-line extraction and obstacle detection. Road extraction is therefore an essential part of environmental perception and an important foundation for realizing autonomous driving.

At present, autonomous vehicles realize environmental perception mainly by carrying a variety of sensors that actively and passively observe the driving environment, much as a human driver does, in order to obtain both the vehicle's own state and the traffic state around it. Thanks to its low cost, high information content, rich texture, and wide field of view, the camera is currently the primary sensor for vehicle environment perception; in areas with poor illumination or heavy shadow, however, the stability of camera-based perception degrades considerably. LiDAR directly measures the three-dimensional coordinates of objects and is largely insensitive to lighting conditions, so it has gradually become an important perception sensor and a useful complement to the camera. This thesis therefore combines LiDAR and camera, fusing point clouds and images to extract the road from the driving environment and to improve the robustness and accuracy of road extraction.

Road extraction based on the fusion of laser point clouds and images is studied from three aspects: first, extrinsic calibration of the camera and the LiDAR, which provides the basis for fusing the two kinds of data; second, road extraction based on region growing that fuses point-cloud and image data; and third, the construction of a deep convolutional neural network that fuses point-cloud and image data at multiple levels. The specific research content is as follows:

(1) Extrinsic calibration of the camera and the LiDAR. The data collected by the camera and the LiDAR are expressed in their respective sensor coordinate systems, so fusing them requires calibrating the two sensors and estimating their extrinsic parameters. This thesis studies a checkerboard-based calibration algorithm: the checkerboard is placed in the common field of view of the camera and the LiDAR, both sensors observe it simultaneously in several different poses (at least three in theory), and the checkerboard plane constraints are then used to solve for the extrinsic parameters relating the two sensors. Experiments on real data verify the validity of the proposed extrinsic calibration algorithm.
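The abstract only names the checkerboard plane constraint, so the following NumPy sketch is merely one common way such a constraint can be solved, not the thesis's exact algorithm. The function names (fit_plane, calibrate_extrinsics), the Kabsch-plus-least-squares decomposition, and the input conventions are all illustrative assumptions: each checkerboard pose is assumed to yield the board plane in the camera frame (e.g. from chessboard-corner PnP) together with the LiDAR points lying on the board.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane n.x = d through an Nx3 point set (SVD of centred points)."""
    centroid = points.mean(axis=0)
    normal = np.linalg.svd(points - centroid)[2][-1]   # direction of least variance
    return normal, float(normal @ centroid)

def calibrate_extrinsics(cam_planes, lidar_point_sets):
    """Estimate (R, t) mapping LiDAR coordinates into the camera frame from >= 3
    checkerboard poses.  cam_planes[i] = (n_c, d_c) is the board plane in the camera
    frame (e.g. from chessboard-corner PnP); lidar_point_sets[i] holds the LiDAR
    points measured on the board in that pose."""
    N_c, N_l, rhs = [], [], []
    for (n_c, d_c), pts in zip(cam_planes, lidar_point_sets):
        n_l, d_l = fit_plane(pts)
        # Orient each normal toward its own sensor origin so the two normals
        # correspond (reasonable when the board is much farther away than the baseline).
        if d_c > 0:
            n_c, d_c = -n_c, -d_c
        if d_l > 0:
            n_l, d_l = -n_l, -d_l
        N_c.append(n_c); N_l.append(n_l); rhs.append(d_c - d_l)
    N_c, N_l = np.asarray(N_c), np.asarray(N_l)
    # Rotation aligning the LiDAR plane normals with the camera plane normals (Kabsch / SVD).
    U, _, Vt = np.linalg.svd(N_c.T @ N_l)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    # Translation from the plane-distance constraints  n_c . t = d_c - d_l.
    t, *_ = np.linalg.lstsq(N_c, np.asarray(rhs), rcond=None)
    return R, t
```

Each additional board pose adds one plane constraint, which is why at least three non-parallel poses are needed to determine the translation.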
(2) Road extraction based on region growing. Because the road surface is made of consistent material, roads generally appear in the image as connected regions of similar color. Visual images, however, are easily disturbed by illumination and shadow, so the apparent color of the road surface may change and reduce the accuracy of image-only road extraction, while the road extracted from the laser point cloud alone is discrete and becomes sparse at long range. The point cloud is therefore used to assist road extraction in the image in two ways: it supplies seed points for region growing in the image, and it supplies the road extent so that the grown region does not overflow the actual road (a sketch of this seed-and-constrain strategy follows this abstract).

(3) Construction of a deep semantic segmentation network with multi-level fusion of point cloud and image. Because the point cloud is sparse, discrete, and irregular, point-cloud and image fusion is usually handled by projecting the point cloud onto the image. Using the camera-LiDAR extrinsic parameters obtained above, the point-cloud data are projected onto the image plane, and a semantic segmentation network is then constructed to extract the road. To fuse the two kinds of data more effectively, an adaptive point-cloud module is designed first, and a multi-level fusion network is then built on the basis of existing point-cloud and image fusion methods: multiple connection modules are inserted between the image-based feature extraction network and the point-cloud-based feature extraction network, so that the two kinds of data are fused deeply at multiple levels inside the network. Finally, the semantic segmentation result is obtained by incorporating the pyramid pooling module of PSPNet, so that the road can be extracted accurately.
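To make step (2), and the point-cloud-to-image projection that both steps (2) and (3) rely on, more concrete, here is a minimal Python sketch. Everything in it is an illustrative assumption rather than the thesis's exact procedure: the function names (project_to_image, grow_road_region), the 4-connectivity, the running-mean colour criterion, and the fixed threshold are choices made only for the example.

```python
import numpy as np
from collections import deque

def project_to_image(points_lidar, R, t, K):
    """Project LiDAR points into the image using the calibrated extrinsics (R, t)
    and the camera intrinsic matrix K; returns integer (u, v) pixel coordinates."""
    pts_cam = points_lidar @ R.T + t           # LiDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]     # keep points in front of the camera
    uvw = pts_cam @ K.T
    return (uvw[:, :2] / uvw[:, 2:3]).astype(int)

def grow_road_region(image, seeds, road_mask, color_thresh=12.0):
    """4-connected region growing: start from point-cloud seed pixels, absorb
    neighbours whose colour is close to the running region mean, and never grow
    outside the coarse road extent (road_mask) given by the point cloud."""
    h, w, _ = image.shape
    img = image.astype(np.float32)
    visited = np.zeros((h, w), dtype=bool)
    region = np.zeros((h, w), dtype=bool)
    seed_px = [(v, u) for u, v in seeds if 0 <= u < w and 0 <= v < h]
    queue = deque(seed_px)
    mean = img[[v for v, _ in seed_px], [u for _, u in seed_px]].mean(axis=0)
    count = len(seed_px)
    while queue:
        v, u = queue.popleft()
        if visited[v, u]:
            continue
        visited[v, u] = True
        if not road_mask[v, u] or np.linalg.norm(img[v, u] - mean) > color_thresh:
            continue
        region[v, u] = True
        mean = (mean * count + img[v, u]) / (count + 1)   # update running mean colour
        count += 1
        for dv, du in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nv, nu = v + dv, u + du
            if 0 <= nv < h and 0 <= nu < w and not visited[nv, nu]:
                queue.append((nv, nu))
    return region
```

In practice the seed pixels would be the image projections of LiDAR points already labelled as road (for example by ground fitting), and road_mask a dilated footprint of those projections, which is what prevents the growth from leaking beyond the road.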
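The multi-level fusion network of step (3) is described only at a high level in this abstract, so the PyTorch sketch below is no more than an assumed illustration of the general design: two parallel encoders (RGB image and a projected LiDAR map), per-level connection modules that inject point-cloud features into the image branch, and a PSPNet-style pyramid pooling head. The class names, channel widths, single-channel LiDAR input, and 1x1-convolution fusion connections are assumptions, not the architecture actually used in the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch, stride=2):
    """A simple strided conv block standing in for one encoder level."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class PyramidPooling(nn.Module):
    """PSPNet-style pyramid pooling: pool at several scales, re-project,
    upsample and concatenate with the input features."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, in_ch // len(bins), 1, bias=False),
                          nn.ReLU(inplace=True))
            for b in bins)
        self.out_ch = in_ch + (in_ch // len(bins)) * len(bins)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x] + [F.interpolate(s(x), size=(h, w), mode="bilinear",
                                     align_corners=False) for s in self.stages]
        return torch.cat(feats, dim=1)

class MultiLevelFusionNet(nn.Module):
    """Two encoders (RGB image and projected point-cloud map) connected at every
    level so that LiDAR features are fused into the image branch, followed by
    pyramid pooling and a binary road-segmentation head."""
    def __init__(self, widths=(32, 64, 128, 256)):
        super().__init__()
        ichans = [3] + list(widths)      # image branch: RGB input
        lchans = [1] + list(widths)      # LiDAR branch: projected depth/height map
        self.img_enc = nn.ModuleList(conv_block(ichans[i], ichans[i + 1])
                                     for i in range(len(widths)))
        self.pc_enc = nn.ModuleList(conv_block(lchans[i], lchans[i + 1])
                                    for i in range(len(widths)))
        # 1x1 convolutions acting as the per-level fusion connections
        self.fuse = nn.ModuleList(nn.Conv2d(w, w, 1) for w in widths)
        self.ppm = PyramidPooling(widths[-1])
        self.head = nn.Conv2d(self.ppm.out_ch, 2, 1)   # road / not-road logits

    def forward(self, image, lidar_map):
        x, y = image, lidar_map
        for img_layer, pc_layer, fuse in zip(self.img_enc, self.pc_enc, self.fuse):
            x, y = img_layer(x), pc_layer(y)
            x = x + fuse(y)              # inject LiDAR features into the image branch
        logits = self.head(self.ppm(x))
        return F.interpolate(logits, size=image.shape[2:], mode="bilinear",
                             align_corners=False)
```

A call such as net(image, lidar_map), where lidar_map is for example a sparse depth image produced by projecting the point cloud with the calibrated extrinsics, returns per-pixel road logits at the input resolution.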
Keywords/Search Tags: extrinsic parameter calibration, road extraction, region growing, multi-level fusion, semantic segmentation