
3D Model Reconstruction Of Indoor Scene Based On RGB-D Camera

Posted on: 2021-04-14    Degree: Master    Type: Thesis
Country: China    Candidate: S Fang    Full Text: PDF
GTID: 2428330611455087    Subject: Signal and Information Processing
Abstract/Summary:
With the rapid development of information technology, 3D reconstruction in computer vision has been applied to many fields and has driven the continuous progress of virtual reality (VR), augmented reality (AR), and related technologies. RGB-D cameras play an important role in the 3D reconstruction of indoor scenes thanks to their light weight, convenience, and high cost performance. This thesis studies the key techniques of indoor-scene 3D model reconstruction based on RGB-D cameras. The basic principles of RGB-D 3D reconstruction cover the classical camera model, camera calibration, camera pose estimation, point cloud fusion, and model generation. Existing indoor 3D reconstruction algorithms based on RGB-D cameras still suffer from insufficient accuracy and poor stability when the viewing angle, illumination, or texture changes greatly. Using the color and depth information of an RGB-D camera, this thesis builds an indoor-scene 3D reconstruction system that can construct 3D models of indoor scenes more robustly. The research focuses on the following:

For the problem of poorly aligned RGB-D data, the RGB-D camera is accurately calibrated before camera pose estimation: the intrinsic and extrinsic parameters of the RGB camera and the depth camera, together with the relative pose between the two cameras, are obtained to align the RGB data and the depth data precisely, laying an effective foundation for camera pose estimation that combines 2D and 3D features.

To address the low quality of RGB-D depth data, outliers and points whose surface normal forms an excessive angle with the camera's main optical axis are removed. Eliminating these noise points and erroneous points benefits the accuracy of subsequent 3D feature extraction and matching.

Traditional RGB-D camera pose estimation based on feature points is not robust enough under large viewpoint changes, large illumination changes, and RGB-D image blur. This thesis introduces 3D feature descriptors to make full use of the texture information of the RGB image and the geometric information of the depth image, and proposes an improved camera pose estimation method that combines 2D and 3D features and uses RANSAC to reject incorrectly matched feature point pairs. Experiments on public datasets verify that the method can still match features robustly under the above complex conditions and can estimate the camera's relative pose to register two-view point clouds. Finally, the improved method is applied to the RGB-D camera-based indoor-scene 3D reconstruction system built in this thesis, which can robustly reconstruct multi-view 3D models of indoor scenes.
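The first research focus registers the depth map to the RGB image using the calibrated intrinsics of both cameras and the depth-to-RGB extrinsic transform. The abstract gives no code, so the following is only a minimal NumPy sketch of that standard alignment step; the function name align_depth_to_rgb and the parameter names (K_d, K_rgb, T_rgb_d) are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def align_depth_to_rgb(depth, K_d, K_rgb, T_rgb_d, rgb_shape):
    """Reproject a depth map into the RGB camera frame (illustrative sketch).

    depth      : HxW depth image from the depth camera (metres, 0 = invalid).
    K_d, K_rgb : 3x3 intrinsic matrices of the depth and RGB cameras.
    T_rgb_d    : 4x4 extrinsic transform taking depth-camera points into the RGB frame.
    rgb_shape  : shape of the RGB image the depth is aligned to.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    u, v = us[valid], vs[valid]

    # Back-project valid depth pixels to 3D points in the depth-camera frame.
    x = (u - K_d[0, 2]) * z / K_d[0, 0]
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    pts_d = np.stack([x, y, z, np.ones_like(z)], axis=0)          # 4 x N

    # Transform into the RGB-camera frame and project with the RGB intrinsics.
    pts_rgb = (T_rgb_d @ pts_d)[:3]
    u_rgb = np.round(K_rgb[0, 0] * pts_rgb[0] / pts_rgb[2] + K_rgb[0, 2]).astype(int)
    v_rgb = np.round(K_rgb[1, 1] * pts_rgb[1] / pts_rgb[2] + K_rgb[1, 2]).astype(int)

    # Scatter the transformed depths onto the RGB pixel grid.
    # (A full pipeline would also undistort and z-buffer colliding pixels.)
    aligned = np.zeros(rgb_shape[:2], dtype=depth.dtype)
    inside = (u_rgb >= 0) & (u_rgb < rgb_shape[1]) & (v_rgb >= 0) & (v_rgb < rgb_shape[0])
    aligned[v_rgb[inside], u_rgb[inside]] = pts_rgb[2][inside]
    return aligned
```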
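The second focus removes outliers and points whose normal forms an excessive angle with the main optical axis. The sketch below shows one common way such a cleanup can be realized with NumPy/SciPy (a statistical outlier filter plus a normal-angle threshold); the thresholds and the function name filter_point_cloud are assumptions for illustration, not the thesis's concrete procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_point_cloud(points, normals, k=20, std_ratio=2.0, max_angle_deg=75.0):
    """Illustrative cleanup of a depth-derived point cloud in the camera frame
    (optical axis = +Z). Returns a boolean mask of points to keep.

    1. Statistical outlier removal: drop points whose mean distance to their
       k nearest neighbours is far above the global average.
    2. Normal-angle filter: drop points whose surface normal forms too large an
       angle with the main optical axis, since their depth tends to be unreliable.
    """
    # --- 1. statistical outlier removal ------------------------------------
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)        # first column is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep_outlier = mean_d < mean_d.mean() + std_ratio * mean_d.std()

    # --- 2. normal-angle filter ---------------------------------------------
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_angle = np.abs(n[:, 2])                   # angle to the optical axis (0, 0, 1)
    keep_angle = cos_angle >= np.cos(np.radians(max_angle_deg))

    return keep_outlier & keep_angle
```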
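The third focus estimates the relative camera pose from combined 2D-3D feature matches and uses RANSAC to reject wrong matches before registering two-view point clouds. As a hedged illustration of the RANSAC idea only (not the thesis's joint 2D-3D formulation), the sketch below fits a rigid transform to putative 3D-3D correspondences with a Kabsch solver inside a RANSAC loop; all names, thresholds, and iteration counts are assumptions. The resulting transform and inlier set would then initialize the two-view point-cloud registration described above.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src -> dst, both Nx3."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def ransac_pose(src, dst, iters=1000, thresh=0.02, rng=None):
    """RANSAC over putative 3D-3D feature matches: repeatedly fit a rigid
    transform to 3 random correspondences, keep the hypothesis with the most
    inliers, then refit on all inliers. Returns R, t and the inlier mask."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = rigid_transform(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers
```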
Keywords/Search Tags: 3D reconstruction, RGB-D camera, Point cloud registration, Camera pose estimation