
Information Fusion Based 3D Reconstruction In Driving Environment

Posted on: 2021-05-14
Degree: Master
Type: Thesis
Country: China
Candidate: Z P Zhu
Full Text: PDF
GTID: 2428330626456038
Subject: Signal and Information Processing
Abstract/Summary:
The three-dimensional (3D) structure of the driving environment is basic information that an autonomous driving system uses to identify and localize targets. 3D reconstruction algorithms based on visual data have the advantages of low cost and dense reconstruction results, but they cannot obtain accurate reconstructions in extreme weather such as rain and fog. Reconstruction algorithms based on LiDAR are highly robust, but LiDAR is expensive and its data is sparse. It is therefore of great significance to study 3D reconstruction algorithms that fuse visual and LiDAR data in extreme environments.

The basic unit of visual data is the pixel, which records the color and texture information of the scene; LiDAR data takes 3D points as its basic element and records the spatial position information of the scene. How to fuse these two heterogeneous kinds of data effectively is a challenging problem. This thesis therefore applies deep learning to LiDAR and camera data and investigates 3D point cloud reconstruction based on feature fusion in the driving environment. The specific contributions are as follows:

First, to address the insufficient accuracy of single-sensor depth estimation in the autonomous driving environment, this thesis studies a depth estimation algorithm based on data fusion and builds a feature fusion architecture for visual and LiDAR data. The architecture extracts features from the two kinds of data separately and fuses them at the back end, solving the problem of fusing visual and LiDAR data, realizing fusion at the feature level, and improving depth estimation accuracy in extreme environments.

Second, to combat the high computational complexity of fusion-based depth estimation, this thesis studies a fast depth estimation algorithm based on feature transformation and constructs a feature transformation network that exploits video redundancy. The network transforms key-frame features into current-frame features, avoiding repeated processing of redundant video information and realizing real-time depth estimation on video.

Third, to reduce the high noise of the reconstructed point cloud, this thesis studies a point cloud reconstruction algorithm based on depth information and builds a point cloud smoothing network based on multi-task learning. The network uses auxiliary branches to smooth the results, reducing the noise of the reconstructed point cloud and realizing dense point cloud reconstruction in the driving environment.

Finally, the algorithms are validated on the Virtual KITTI dataset. The results show that the accuracy of the feature-fusion depth estimation algorithm is 95.83% in a dense-fog environment, about 1% higher than that of the direct fusion algorithm. The fast depth estimation algorithm achieves real-time video processing with little loss of precision, and the point cloud reconstruction algorithm realizes dense 3D reconstruction.
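The "extract features separately, fuse at the back end" idea described above can be sketched with a toy example. This is only an illustration of late (feature-level) fusion under assumed shapes and stand-in encoders, not the thesis's actual network; the feature dimensions and the linear-plus-ReLU "encoders" are hypothetical.

```python
import numpy as np

def extract_features(x, w):
    """Toy per-branch encoder: a linear map plus ReLU (stand-in for a CNN branch)."""
    return np.maximum(x @ w, 0.0)

def late_fusion(rgb_feat, lidar_feat):
    """Back-end fusion: concatenate the two feature maps along the channel axis."""
    return np.concatenate([rgb_feat, lidar_feat], axis=-1)

# Hypothetical shapes: 12-channel image input, 4-channel LiDAR input per pixel,
# each encoded to 8 channels before fusion.
rng = np.random.default_rng(0)
rgb = rng.standard_normal((5, 12))     # 5 pixels, 12 image channels
lidar = rng.standard_normal((5, 4))    # sparse LiDAR depth projected to pixels
w_rgb = rng.standard_normal((12, 8))
w_lidar = rng.standard_normal((4, 8))

fused = late_fusion(extract_features(rgb, w_rgb), extract_features(lidar, w_lidar))
print(fused.shape)  # (5, 16)
```

The point of the sketch is the data flow: each modality keeps its own feature extractor, and fusion happens only on the feature layer, so one degraded modality (e.g. the camera in fog) does not corrupt the other branch's features before fusion.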
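The key-frame scheme for real-time video depth can likewise be sketched. In this toy version the heavy feature extractor runs only on key frames, and intermediate frames reuse a transformed copy of the key-frame features; the integer-pixel shift standing in for the learned transformation network, the key-frame interval, and the assumed motion are all illustrative, not the thesis's method.

```python
import numpy as np

def transform_features(key_feat, motion):
    """Toy feature transformation: shift key-frame features by an integer pixel
    motion (a stand-in for the learned feature transformation network)."""
    dy, dx = motion
    return np.roll(key_feat, shift=(dy, dx), axis=(0, 1))

def video_depth_features(frames, key_interval=5):
    """Run the expensive extractor only on key frames; transform in between."""
    feats = []
    key_feat = None
    for i, frame in enumerate(frames):
        if i % key_interval == 0:
            key_feat = frame.astype(float)  # stand-in for the full encoder
        else:
            key_feat = transform_features(key_feat, motion=(0, 1))  # assumed motion
        feats.append(key_feat)
    return feats

frames = [np.ones((4, 4)) * i for i in range(6)]
feats = video_depth_features(frames, key_interval=5)
print(len(feats))  # 6
```

The saving comes from the transformation being far cheaper than the full extractor, which is what makes per-frame real-time depth feasible when consecutive frames are highly redundant.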
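Reconstructing a point cloud from an estimated depth map is standard pinhole back-projection, which can be written down exactly. The intrinsics below are placeholders, not Virtual KITTI's calibration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a dense depth map into a 3D point cloud with a pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth[v, u]."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids, (h, w)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Hypothetical intrinsics and a flat 2x2 depth map, 10 m everywhere.
pts = depth_to_points(np.full((2, 2), 10.0), fx=725.0, fy=725.0, cx=0.5, cy=0.5)
print(pts.shape)  # (4, 3)
```

Because every pixel's depth error becomes a 3D position error here, noise in the estimated depth map translates directly into point cloud noise, which is why the thesis adds a smoothing branch before back-projection.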
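The abstract reports a depth-estimation accuracy of 95.83% without naming the metric. A common threshold-accuracy metric on KITTI-style benchmarks is the fraction of pixels whose predicted/ground-truth depth ratio falls within δ < 1.25; the sketch below assumes that metric, which may not be the one the thesis uses.

```python
import numpy as np

def threshold_accuracy(pred, gt, thresh=1.25):
    """Fraction of pixels with max(pred/gt, gt/pred) below the threshold."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float((ratio < thresh).mean())

# Toy example: two of four pixels fall within the delta < 1.25 band.
pred = np.array([10.0, 9.0, 5.0, 20.0])
gt = np.array([10.0, 10.0, 10.0, 10.0])
print(threshold_accuracy(pred, gt))  # 0.5
```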
Keywords/Search Tags: autonomous driving, depth estimation, 3D point cloud reconstruction, LiDAR, information fusion