Design And Implementation Of 3D Multi-view Scene Construction Based On Data Fusion

Posted on: 2020-05-14    Degree: Master    Type: Thesis
Country: China    Candidate: D G Pang    Full Text: PDF
GTID: 2428330590958202    Subject: Control Engineering
Abstract/Summary:
3D scene construction refers to the technology of recording, storing, and displaying real-world scenes as data in a computer. In recent years, the rapid advancement of laser scanning technology has made it easier and more efficient to accurately acquire the three-dimensional geometry of a real scene, while the two-dimensional images provided by a camera contain the scene's color and texture information; combining the two enables the construction of realistic three-dimensional scenes, which have broad prospects in intelligent robot positioning and navigation, geographic information systems, cultural relics protection, virtual reality, games, and television. The research in this thesis is divided into two aspects: precise registration of the point cloud and images, and fusion of the point cloud with multi-view images according to the registration result.

First, this thesis divides registration into camera calibration and the joint calibration of the camera and laser scanner that builds on the camera calibration results. Camera calibration determines the plane model of the calibration board in the camera coordinate system. For joint calibration, the RANSAC algorithm extracts the point cloud belonging to the calibration-board plane, and the distances from these points to the corresponding plane in the camera coordinate system are then minimized. This yields the optimal transformation between the two sets of data, from which the registration relationship between the point cloud and the image can be computed. The method avoids the error introduced by manually selecting three-dimensional points and achieves high registration accuracy.

Then, the point cloud and the single-view image are fused by linear interpolation. To address the camera's limited field of view, the camera is mounted on a plane perpendicular to the rotation axis, and the rotation angle of the shaft is measured by a circular grating. A rotation matrix and a translation vector (without scale information) are obtained from SIFT feature matches between adjacent images, and averaging the rotation angle given by this rotation matrix with the angle provided by the circular grating compensates for the grating's accumulated error. The scale of the translation vector is recovered from the camera mounting geometry, which determines the registration relationship between the point cloud and the image at the new position. This method avoids repeating the tedious joint-calibration procedure during fusion, and the orientation of the camera can be changed at any time through the turntable.

Finally, a software and hardware system for building 3D multi-view scenes is designed and developed based on the above research. With this equipment and these principles, the point cloud and multi-view images are effectively fused in a laboratory environment, yielding a high-quality colored 3D point cloud model.
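The joint-calibration step described above can be illustrated with a short sketch: a RANSAC plane fit extracts the calibration-board points from the scanner cloud, and the scanner-to-camera rigid transform is estimated by minimizing the point-to-plane distances to the board plane already known in the camera frame. This is a minimal sketch of the idea under stated assumptions, not the thesis's actual implementation; the libraries (open3d, scipy) and all function and variable names are illustrative.

```python
# Minimal sketch, assuming the calibration-board plane (n, d) per pose is known
# in the camera frame from camera calibration, and the scanner cloud of the
# board is an Nx3 array. All names here are illustrative, not the thesis code.
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def extract_board_points(scan_xyz, dist_thresh=0.01):
    """RANSAC-fit the calibration-board plane in the scanner cloud and
    return the inlier points (scanner frame)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(scan_xyz)
    _, inliers = pcd.segment_plane(distance_threshold=dist_thresh,
                                   ransac_n=3, num_iterations=1000)
    return scan_xyz[inliers]

def point_to_plane_residuals(params, board_pts, planes):
    """Residuals: signed distances of transformed board points to the
    corresponding camera-frame plane n^T x + d = 0, over all poses."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:6]
    res = []
    for pts, (n, d) in zip(board_pts, planes):
        x_cam = pts @ R.T + t          # scanner frame -> camera frame
        res.append(x_cam @ n + d)      # point-to-plane distances
    return np.concatenate(res)

def joint_calibrate(board_pts_per_pose, planes_per_pose):
    """Estimate the scanner-to-camera rigid transform by minimizing the
    summed point-to-plane distances over all calibration-board poses."""
    x0 = np.zeros(6)                   # rotation vector + translation
    sol = least_squares(point_to_plane_residuals, x0,
                        args=(board_pts_per_pose, planes_per_pose))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:6]
```

Using several board poses, as the loop above suggests, constrains all six degrees of freedom; a single plane alone would leave the in-plane motion unobservable.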
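The angle-fusion step can be sketched in the same spirit: SIFT matches between adjacent images give an essential matrix, from which a rotation (and an unscaled translation) is recovered, and the rotation angle about the turntable axis is averaged with the circular-grating reading. This is a hedged sketch under assumptions: the intrinsic matrix K, the turntable-axis direction, the equal weighting, and the OpenCV-based pipeline are illustrative choices, not the thesis's actual code.

```python
# Hedged sketch of fusing the SIFT-derived rotation angle with the
# circular-grating (encoder) angle. K and the unit turntable-axis vector are
# assumed known from calibration; names are illustrative.
import cv2
import numpy as np

def relative_rotation_angle(img_prev, img_curr, K, axis):
    """Estimate the rotation angle about the turntable axis between two
    adjacent views from SIFT matches (translation is recovered without scale)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_prev, None)
    kp2, des2 = sift.detectAndCompute(img_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    rotvec, _ = cv2.Rodrigues(R)
    # Project the rotation vector onto the turntable axis to get a signed angle.
    return float(rotvec.ravel() @ axis), t.ravel()

def fused_angle(theta_sift, theta_grating, w=0.5):
    """Average the image-based angle with the circular-grating angle to
    suppress the encoder's accumulated error (equal weights assumed here)."""
    return w * theta_sift + (1.0 - w) * theta_grating
```

The scale of the unscaled translation returned by the pose recovery would then be fixed by the known camera mounting radius on the turntable, as the abstract describes.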
Keywords/Search Tags:3D laser point clouds, 2D image, Registration and Fusion, Multi-view images