
Research On Data Fusion Method Of Lidar Point Cloud And Visible Light Images

Posted on: 2021-07-08
Degree: Master
Type: Thesis
Country: China
Candidate: Z C Huang
Full Text: PDF
GTID: 2518306107952949
Subject: Control Engineering

Abstract/Summary:
In recent years, the performance of computer-vision sensors, represented by visible-light cameras and lidars, has improved steadily. Tech giants and university research projects have continually tapped the potential of the associated algorithms, broadening their applications and strengthening their results. Data fusion that combines the complementary strengths of these two environment-sensing devices is likewise an active research field: autonomous-driving technology and SLAM (Simultaneous Localization And Mapping) both make comprehensive use of these two mainstream sensors. However, lidar point-cloud data is large in volume, unordered, sparse, and unstructured, which makes good data fusion between point clouds and visual images difficult. Accordingly, this paper studies pixel-level data fusion of lidar point clouds and visible-light images in natural scenes. The main contents are as follows:

(1) A data-fusion experimental platform is built with a Velodyne HDL-64E lidar and a Kinect v2 camera as the information-acquisition devices. The lidar's horizontal and vertical resolution and its stable periodic sampling are analyzed, and the horizontal scan-line data are taken as the high-precision raw data for the experiments in this paper. The intrinsic calibration experiment of the camera is completed, yielding its intrinsic matrix.

(2) The camera-lidar joint extrinsic calibration method is studied. A manual extrinsic calibration based on the surface features of a calibration board and an automatic extrinsic calibration based on the corner-point features of rectangular rigid bodies in natural scenes are both implemented. The re-projection errors of the two extrinsic calibration methods are compared, and the extrinsic result with the smaller error is used to fuse the RGB image and the point-cloud data at the pixel level.

(3) Densification and depth-completion operations are performed on the resulting sparse fused depth image. Rectangular-kernel dilation fills missing data in each pixel's neighborhood, densifying the depth image. The ICP algorithm superimposes adjacent frames to supplement the missing parts of the lidar's horizontal scan-line data. For the sparse depth information in the vertical direction, the IP-Basic algorithm is improved, using the RGB image as a guide input, to complete depth completion of the dense depth image and finally obtain a smooth depth image.
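The pixel-level fusion in (2) amounts to transforming each lidar point into the camera frame with the extrinsic parameters and projecting it through the intrinsic matrix. A minimal NumPy sketch, assuming a hypothetical intrinsic matrix `K` and an identity lidar-to-camera extrinsic for illustration (the thesis obtains both from its calibration experiments):

```python
import numpy as np

# Hypothetical intrinsic matrix and extrinsic [R|t]; real values come from
# the checkerboard intrinsic calibration and the joint extrinsic calibration.
K = np.array([[1050.0,    0.0, 960.0],
              [   0.0, 1050.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                       # assumed lidar-to-camera rotation
t = np.array([0.0, 0.0, 0.0])       # assumed lidar-to-camera translation

def project_points(points_lidar, K, R, t):
    """Project Nx3 lidar points into pixel coordinates, keeping depth."""
    cam = points_lidar @ R.T + t    # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]        # keep only points in front of the camera
    uvw = cam @ K.T                 # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]   # normalize by depth
    return uv, cam[:, 2]            # pixel coordinates and per-point depth

points = np.array([[0.0, 0.0, 10.0],   # a point 10 m straight ahead
                   [1.0, 0.5, 10.0]])
uv, depth = project_points(points, K, R, t)
print(uv[0])   # the on-axis point lands at the principal point (960, 540)
```

Splatting each point's depth at its pixel location produces the sparse fused depth image that step (3) densifies.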
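The rectangular-kernel dilation of step (3) can be illustrated with a simplified stand-in: missing (zero) pixels take the maximum valid depth within a small window. Note that IP-Basic additionally inverts depths before dilating so that closer points win; that refinement is omitted here for brevity.

```python
import numpy as np

def dilate_depth(depth, k=3):
    """Fill missing (zero) pixels with the max depth in a k x k window.

    Simplified stand-in for the rectangular-kernel dilation step; a real
    pipeline (e.g. IP-Basic) inverts depths first so dilation favors the
    nearest surface.
    """
    pad = k // 2
    padded = np.pad(depth, pad, mode="constant")
    out = depth.copy()
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            if out[i, j] == 0:                    # only fill holes
                out[i, j] = padded[i:i + k, j:j + k].max()
    return out

sparse = np.zeros((5, 5))
sparse[2, 2] = 7.0              # a single valid lidar return
dense = dilate_depth(sparse)
print(dense[1, 1])              # hole adjacent to the return is filled: 7.0
```

In practice this inner loop would be replaced by a vectorized morphological dilation (e.g. `scipy.ndimage.grey_dilation` or OpenCV's `cv2.dilate`).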
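The adjacent-frame superposition in step (3) relies on ICP registration. A minimal point-to-point sketch using the closed-form Kabsch/SVD alignment, with correspondences assumed index-aligned for brevity (a real implementation finds them with a nearest-neighbor search and rejects outliers):

```python
import numpy as np

def icp_align(src, dst, iters=20):
    """Minimal point-to-point ICP: estimate R, t such that dst ~ R @ src + t.

    Sketch of the frame-superposition step; correspondences are assumed
    index-aligned, so this reduces to iterated Kabsch alignment.
    """
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Closed-form rigid alignment via SVD of the cross-covariance
        mu_s, mu_d = cur.mean(0), dst.mean(0)
        H = (cur - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflection
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t                        # apply the increment
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
a = rng.standard_normal((50, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
b = a @ Rz.T + np.array([0.5, -0.2, 0.1])
R_est, t_est = icp_align(a, b)
```

Once aligned, the registered frames are accumulated to fill the gaps between horizontal scan lines before the guided depth completion.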
Keywords/Search Tags:Lidar, Camera, Intrinsic calibration, Joint extrinsic calibration, Data fusion, Depth completion