
Research On Multi-sensor Fusion Environment Perception Algorithm In Autonomous Driving

Posted on: 2021-02-19
Degree: Master
Type: Thesis
Country: China
Candidate: Z Y Chen
Full Text: PDF
GTID: 2392330614966021
Subject: Software engineering
Abstract/Summary:
With the declining price of LiDAR equipment, autonomous driving has gradually shifted from single-sensor recognition to multi-sensor fusion. Because LiDAR point clouds are three-dimensional, they place higher demands on the computing and storage capacity of recognition algorithms and raise further challenges such as point cloud feature extraction and fusion network construction. To address the large volume of point cloud data and the difficulty of fusing images with point clouds, this dissertation studies a sensor fusion recognition algorithm based on the KITTI data set that classifies and localizes different targets and outputs their 3D bounding boxes.

First, this dissertation introduces the point cloud data in the KITTI data set that is relevant to the sensor fusion algorithm, together with the evaluation criteria for detection results. Because point cloud data is unordered, sparse, and limited in information, a bird's eye view representation is proposed: the height, density, and intensity of the point cloud are fused and converted into an image-like format for feature extraction. After reviewing multi-sensor fusion schemes, the AVOD (Aggregate View Object Detection) network is selected as the basic fusion framework and divided into a feature extraction module and a bounding box screening module for analysis and optimization.

Second, for the feature extraction module, traditional feature detection algorithms extract features from LiDAR point clouds poorly. This dissertation therefore designs a feature extraction network based on octave convolution. The network combines octave convolution with a multi-layer feature combination detection scheme, which reduces the demand for computing power, enlarges the receptive field, and improves the detection of small targets. In addition, Leaky ReLU and batch normalization are used so that the network converges quickly and is less prone to overfitting.

Third, for the bounding box screening module, the IoU (Intersection over Union) criterion used in traditional screening algorithms is too simple to capture the orientation, height, and length of a 3D box. This dissertation designs a screening algorithm for 3D bounding boxes that splits screening into a 2D stage and a 3D stage. In the 2D stage, a 3D Soft-NMS (Soft Non-Maximum Suppression) algorithm reduces the missed detections caused by overlapping targets. In the 3D stage, the GIoU (Generalized Intersection over Union) algorithm is used to design a 3D-IoU evaluation criterion, from which a new pose estimation loss function is derived; this improves the accuracy and efficiency of 3D box screening and reduces box overlap.

Finally, to address the weak detection ability of the AVOD fusion algorithm and its limited accuracy on small targets, the above improvements are integrated into the proposed OC3D-AVOD algorithm, which fuses the sensors more efficiently. Comparative and fusion experiments against the MV3D and AVOD frameworks demonstrate the superiority of OC3D-AVOD.
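As a rough illustration of the bird's-eye-view encoding described in the abstract, the sketch below projects a LiDAR point cloud onto a ground-plane grid and fills height, intensity, and density channels. The grid extents, resolution, and function name are assumptions made for illustration, not the dissertation's actual settings.

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                       resolution=0.1):
    """Encode a LiDAR point cloud, an (N, 4) array of (x, y, z, intensity),
    as a bird's-eye-view image with height, intensity, and density channels.
    Ranges and resolution are illustrative, not the dissertation's values."""
    rows = int((x_range[1] - x_range[0]) / resolution)
    cols = int((y_range[1] - y_range[0]) / resolution)
    bev = np.zeros((rows, cols, 3), dtype=np.float32)
    height = np.full((rows, cols), -np.inf, dtype=np.float32)

    # Keep only points inside the chosen ground-plane window.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Map metric coordinates to grid cells.
    r = ((pts[:, 0] - x_range[0]) / resolution).astype(np.int32)
    c = ((pts[:, 1] - y_range[0]) / resolution).astype(np.int32)

    for i in range(pts.shape[0]):
        # Channel 0: maximum height in the cell; channel 1: intensity of the
        # highest point; channel 2: point count (log-normalized below).
        if pts[i, 2] > height[r[i], c[i]]:
            height[r[i], c[i]] = pts[i, 2]
            bev[r[i], c[i], 1] = pts[i, 3]
        bev[r[i], c[i], 2] += 1.0

    height[np.isinf(height)] = 0.0          # empty cells get height 0
    bev[:, :, 0] = height
    bev[:, :, 2] = np.minimum(1.0, np.log1p(bev[:, :, 2]) / np.log(64.0))
    return bev
```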
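The Soft-NMS idea used in the 2D screening stage can be sketched in general form as follows: instead of discarding every box that overlaps a higher-scoring box, the overlapping boxes keep a decayed confidence score. The Gaussian decay, the sigma value, and the function names are assumptions for illustration; only the decay-instead-of-suppress behaviour reflects the abstract.

```python
import numpy as np

def soft_nms(boxes, scores, iou_fn, sigma=0.5, score_thresh=0.001):
    """Generic Soft-NMS: decay the scores of overlapping boxes instead of
    removing them outright. `iou_fn(box, boxes)` must return the overlap of
    one box with each box in an array; sigma and score_thresh are
    illustrative defaults, not values taken from the dissertation."""
    scores = np.asarray(scores, dtype=np.float64).copy()
    keep = []
    idx = np.arange(len(scores))
    while idx.size > 0:
        # Pick the remaining box with the highest (possibly decayed) score.
        best = idx[np.argmax(scores[idx])]
        keep.append(int(best))
        idx = idx[idx != best]
        if idx.size == 0:
            break
        overlaps = iou_fn(boxes[best], boxes[idx])
        # Gaussian decay: the more a box overlaps the kept box, the more its
        # score shrinks; boxes falling below the threshold are dropped.
        scores[idx] *= np.exp(-(overlaps ** 2) / sigma)
        idx = idx[scores[idx] > score_thresh]
    return keep
```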
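For the 3D screening stage, a GIoU-style overlap between two axis-aligned 3D boxes can be sketched as below; a regression loss of the form 1 − GIoU then penalizes boxes that do not overlap at all as well as boxes that overlap poorly. This simplified sketch ignores box yaw, which the dissertation's 3D-IoU criterion would additionally have to handle.

```python
import numpy as np

def giou_3d_axis_aligned(box_a, box_b):
    """GIoU between two axis-aligned 3D boxes given as
    (xmin, ymin, zmin, xmax, ymax, zmax); assumes non-degenerate boxes.
    Unlike a full oriented 3D-IoU, yaw is ignored here."""
    a_min, a_max = np.asarray(box_a[:3], float), np.asarray(box_a[3:], float)
    b_min, b_max = np.asarray(box_b[:3], float), np.asarray(box_b[3:], float)

    # Intersection volume (zero if the boxes do not overlap).
    inter = np.prod(np.maximum(0.0,
                               np.minimum(a_max, b_max) - np.maximum(a_min, b_min)))

    vol_a = np.prod(a_max - a_min)
    vol_b = np.prod(b_max - b_min)
    union = vol_a + vol_b - inter
    iou = inter / union

    # Smallest axis-aligned box enclosing both inputs.
    enclosing = np.prod(np.maximum(a_max, b_max) - np.minimum(a_min, b_min))

    # GIoU = IoU - (enclosing volume not covered by the union) / enclosing volume.
    return iou - (enclosing - union) / enclosing
```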
Keywords/Search Tags:Sensor fusion, driverless, point cloud data processing, target detection