
Object Detection in Autonomous Driving with Multi-Sensor Fusion

Posted on: 2020-12-17  Degree: Master  Type: Thesis
Country: China  Candidate: Y P Liao  Full Text: PDF
GTID: 2392330596476191  Subject: Signal and Information Processing
Abstract/Summary:
In the field of autonomous driving, object detection is essential. However, object detection relying on a single sensor cannot be sufficiently accurate in complex traffic scenes, so multi-sensor fusion is particularly important to study. This thesis applies deep learning to camera, radar, and lidar data and investigates multi-sensor object detection for autonomous driving. The specific contents are as follows:

First, this thesis studies a 2D object detection method based on an image preprocessing network. To address the varying camera pose in autonomous driving, transfer learning is introduced and an image preprocessing network based on adversarial pre-training is studied. On the one hand, camera data are used to pre-train the feature extraction network; on the other hand, a multi-view object detection dataset is constructed, which improves the generalization ability of the deep learning model.

Second, this thesis studies a target-level fusion method for camera and radar based on optimal matching. To overcome the incompleteness of object detection with a single sensor, optimal matching based on the Hungarian algorithm and the Kalman filter is introduced to fuse camera data and radar data (a simplified sketch of this association-and-tracking step is given after this abstract). The method exploits the detection and classification ability of the camera and the robustness of the radar, and a multi-sensor object detection dataset is built, which improves the camera's detection accuracy for long-distance objects. Moreover, the method improves detection accuracy in 3D space, enhances the efficiency of camera-radar fusion, and provides a reliable prediction of object acceleration.

Third, a feature-level fusion algorithm between camera and lidar based on deep learning is studied. By sharing the object detection network and the fusion network, the repeated extraction of image features and fusion features is avoided. Because camera data and lidar data differ greatly, two separate deep-learning feature extraction networks are proposed, and object candidate regions are generated through deep fusion. Results on the KITTI dataset show that this feature-level multi-sensor fusion improves detection accuracy over a single sensor and also reduces the running time of object detection.

Finally, a data-level multi-sensor fusion algorithm is investigated. This thesis studies a data-level fusion method for camera and lidar together with feature extraction from point cloud data. To address the sparsity of lidar data, a multi-scale, multi-dimensional feature extraction network is proposed. In addition, because lidar points are unordered, a voxelization method is studied (an illustrative voxelization sketch also follows this abstract), and 3D convolutional neural networks are employed to achieve end-to-end fusion. Data-level multi-sensor fusion exploits the learning ability of the neural network more effectively and improves the recall of 3D object detection.
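The abstract does not give implementation details of the camera-radar target-level fusion, so the following is only a minimal sketch of the general technique it names: associating camera detections with radar returns via the Hungarian algorithm and tracking the fused target with a Kalman filter. The function names, gating threshold, state layout, and noise parameters are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch: camera-radar target-level association (Hungarian algorithm)
# followed by a constant-acceleration Kalman filter, so acceleration can be read
# directly from the tracked state. All parameters are placeholder assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate(camera_centers, radar_points_img, gate=50.0):
    """Match camera bounding-box centres to radar targets projected into the image.

    camera_centers:   (N, 2) pixel centres of camera detections
    radar_points_img: (M, 2) radar targets projected onto the image plane
    Returns (camera_idx, radar_idx) pairs whose distance is within the gate.
    """
    cost = np.linalg.norm(
        camera_centers[:, None, :] - radar_points_img[None, :, :], axis=-1
    )
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]


class ConstantAccelerationKF:
    """1-D Kalman filter with state [range, radial velocity, acceleration]."""

    def __init__(self, dt=0.05, q=1.0, r=0.5):
        self.x = np.zeros(3)                          # [pos, vel, acc]
        self.P = np.eye(3) * 10.0
        self.F = np.array([[1.0, dt, 0.5 * dt * dt],  # constant-acceleration motion
                           [0.0, 1.0, dt],
                           [0.0, 0.0, 1.0]])
        self.H = np.array([[1.0, 0.0, 0.0],           # radar measures range
                           [0.0, 1.0, 0.0]])          # and radial velocity
        self.Q = np.eye(3) * q
        self.R = np.eye(2) * r

    def step(self, z):
        # Predict with the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with radar measurement z = [range, radial_velocity].
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ self.H) @ self.P
        return self.x                                 # includes acceleration estimate
```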
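Likewise, the voxelization step mentioned for the data-level fusion is only named, not specified. Below is a generic sketch of how an unordered lidar point cloud can be discretised into a fixed occupancy grid suitable for a 3D CNN; the grid extents and voxel size are assumptions chosen for illustration.

```python
# Hypothetical voxelization sketch: turn an unordered (N, 3) lidar point cloud
# into a binary 3-D occupancy grid. Ranges and resolution are placeholder values.
import numpy as np


def voxelize(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0),
             z_range=(-3.0, 1.0), voxel_size=(0.2, 0.2, 0.2)):
    """points: (N, 3) lidar coordinates; returns an occupancy grid of shape (D, H, W)."""
    mins = np.array([x_range[0], y_range[0], z_range[0]])
    maxs = np.array([x_range[1], y_range[1], z_range[1]])
    size = np.array(voxel_size)

    # Keep only points inside the region of interest.
    mask = np.all((points >= mins) & (points < maxs), axis=1)
    idx = ((points[mask] - mins) / size).astype(np.int64)

    # Grid stored as (z, y, x) so depth is the leading dimension for a 3-D CNN.
    dims = np.ceil((maxs - mins) / size).astype(np.int64)
    grid = np.zeros(tuple(dims[::-1]), dtype=np.float32)
    grid[idx[:, 2], idx[:, 1], idx[:, 0]] = 1.0       # mark occupied voxels
    return grid
```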
Keywords/Search Tags:Autonomous driving, deep learning, object detection, multi-sensor fusion