
Spatial Reconstruction Algorithm Using Multi-source Image And Point Cloud Data Fusion

Posted on: 2024-06-13
Degree: Master
Type: Thesis
Country: China
Candidate: S M Hu
GTID: 2568307157494404
Subject: Electronic information

Abstract/Summary:
Three-dimensional spatial reconstruction is widely used in autonomous driving, motion monitoring, security surveillance, and other fields, where it plays a vital role. A key link in spatial reconstruction is environmental perception: the use of sensors to obtain information about the surrounding environment, analogous to human sensory organs, which is crucial for the computer's subsequent analysis, processing, and decision-making. Because a single sensor can acquire only a specific kind of information, it has inherent limitations and cannot meet the complex perceptual demands of the real world. Multi-sensor fusion reconstruction has therefore become a research focus. Targeting the required spatial scenes, this thesis studies a spatial reconstruction method based on the fusion of multi-source images and point cloud data. The main work is as follows:

(1) In image and point cloud fusion, previous studies mostly fused visible images with point clouds. This thesis adds an infrared camera alongside the visible-light camera, fuses the infrared and visible images at the pixel level, and then fuses the result with the point cloud for reconstruction, extending the range of applicable scenes.

(2) For the registration of multi-source images with point cloud data, this thesis takes the visible camera as the intermediate reference: the visible camera and the lidar are first jointly calibrated, a homography matrix then registers the visible image to the infrared image, and the multi-source images are finally registered to the point cloud data. For the time-synchronization problem of multi-source data, a MATLAB timer collects the data synchronously, keeping the time error between the sources within one group at the millisecond level.

(3) For the fusion of multi-source images and point cloud data, this thesis first performs color fusion of the visible and infrared images, and then projects the three-dimensional point cloud onto a two-dimensional plane for image fusion. An image fusion method based on multistage latent low-rank decomposition and local energy is proposed: the infrared and visible images are decomposed into a base layer and multiple detail layers, which are then fused and reconstructed separately. Compared with six state-of-the-art algorithms, the proposed method improves the contrast and sharpness of the fused images, with excellent detail and texture, clear contours, and prominent targets. To recover the color information of the target, the fused grayscale image is converted into a color image using the RGB and YUV color spaces. The information of the color-fused image is then extracted and assigned to the point cloud projected onto the fusion image plane, yielding the multi-source fused data, and the colored point cloud is finally back-projected into 3D space.

In summary, this thesis independently builds a synchronous acquisition platform for the fusion of multi-source images and point cloud data according to the spatial-scene requirements of autonomous driving, acquires multi-source data, performs data fusion and reconstruction, and finally obtains a spatial scene with color and texture that highlights targets and is suitable for complex environments.
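The homography-based registration of step (2) amounts to mapping infrared pixel coordinates into the visible-image frame through a 3x3 matrix. A minimal numpy sketch (the matrix `H` below is a hypothetical pure translation, not the thesis's estimated homography):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography (homogeneous divide)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = pts_h @ H.T                              # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide back to 2D

# Hypothetical homography: a pure translation of (5, -3) pixels.
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])

corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0]])
warped = apply_homography(H, corners)
```

In practice the homography would be estimated from matched corner points between the calibrated visible and infrared views rather than written down by hand.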
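The two-scale fusion idea of step (3) can be illustrated with a deliberately simplified sketch: a box blur stands in for the thesis's multistage latent low-rank base extraction, the base layers are averaged, and each detail pixel is taken from whichever image has the larger local energy. All names and the filter size are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def box_blur(img, k=7):
    """Separable box filter with edge padding; a crude stand-in for base-layer extraction."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    p = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, p)
    return p

def fuse(ir, vis, k=7):
    """Two-scale fusion: average the base layers, pick the detail with larger local energy."""
    base_ir, base_vis = box_blur(ir, k), box_blur(vis, k)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    base = 0.5 * (base_ir + base_vis)       # base layers: simple average
    energy_ir = box_blur(det_ir**2, k)      # local energy of each detail layer
    energy_vis = box_blur(det_vis**2, k)
    detail = np.where(energy_ir >= energy_vis, det_ir, det_vis)
    return base + detail

rng = np.random.default_rng(0)
ir = rng.random((16, 16))    # stand-in infrared image
vis = rng.random((16, 16))   # stand-in visible image
fused = fuse(ir, vis)
```

The real method decomposes into multiple detail layers and fuses each level separately; the single-level version above only shows the base/detail split and the local-energy selection rule.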
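Projecting the lidar point cloud onto the fusion image plane, as in step (3), is a standard pinhole projection through the jointly calibrated extrinsics and intrinsics. A numpy sketch with hypothetical calibration values (identity extrinsics, 500 px focal length, 320x240 principal point; the thesis's calibrated parameters would differ):

```python
import numpy as np

def project_points(pts, K, R, t):
    """Project Nx3 lidar points into the image via extrinsics [R|t] and intrinsics K."""
    cam = pts @ R.T + t            # lidar frame -> camera frame
    uvw = cam @ K.T                # apply intrinsic matrix
    uv = uvw[:, :2] / uvw[:, 2:3]  # perspective divide to pixel coordinates
    return uv, cam[:, 2]           # pixel coords and per-point depth

# Hypothetical calibration (assumed values, for illustration only).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

pts = np.array([[0.0, 0.0, 2.0],   # a point on the optical axis
                [1.0, 0.0, 2.0]])
uv, depth = project_points(pts, K, R, t)
```

Colorizing the cloud is then a lookup: each projected point samples the fused image at its (u, v) pixel, and the colored points are carried back to 3D by keeping their original coordinates.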
Keywords/Search Tags:Joint calibration, Infrared and visible image fusion, Point cloud and image fusion, Point cloud registration