
3D Reconstruction Based On Multi-sensor Fusion In Indoor Environment

Posted on: 2019-05-16
Degree: Master
Type: Thesis
Country: China
Candidate: L Y Liu
GTID: 2348330545996001
Subject: Computer Science and Technology

Abstract/Summary:
In recent years, with the rapid development of virtual reality and visual recognition, building a mobile robot system that understands scenes well and can serve in complex environments has become a challenging and far-reaching topic. Traditional computer-vision approaches to indoor 3D reconstruction suffer from limited estimation accuracy, are easily affected by light and weather, provide no direct access to location information, and lack robustness. Conversely, the 3D point clouds collected by LiDAR devices lack color information, which makes visualizing the scene inconvenient. To address these problems, this paper presents an indoor 3D reconstruction system based on multi-sensor fusion.

The system implemented in this paper consists of two main modules: multi-sensor data fusion and global coordinate transformation.

In the multi-sensor data fusion module, we first calibrated the color camera with the chessboard calibration method, using the Levenberg-Marquardt algorithm to solve the objective function quickly. To obtain 3D information more accurately, we used the geometric model of the LiDAR to compute the point cloud, collected data at different positions relative to a wall, and performed nonlinear optimization of the LiDAR intrinsic calibration.

The LiDAR extrinsic parameters were calibrated in two steps. First, we compared the ground plane observed in the LiDAR coordinate system with the actual ground, obtaining the rotation and translation matrices of the LiDAR frame relative to the vehicle frame in pitch and roll. Second, by comparing the yaw angle and translation of a calibration rod observed in the LiDAR coordinate system before and after motion against the vehicle's yaw change reported by the gyroscope, we recovered the rotation and translation matrices caused by the yaw angle. Combining the matrices from these two steps yields the complete LiDAR extrinsic parameters.

This paper also proposes a novel two-dimensional wall-projection detection algorithm based on the Hough transform, which, combined with a connected-component labeling algorithm, detects the four peaks in parameter space. By computing the corresponding points in the two-dimensional Cartesian coordinate system, the walls are precisely located in the scene after point cloud traversal and parameter fitting.

Building on these wall point cloud results, a joint calibration scheme using a two-dimensional calibration board together with the wall surface is proposed, exploiting the planar properties common to both. From multiple images we obtain the matrix of wall normal vectors in the camera coordinate system and the matrix of wall-to-camera distances; the corresponding two matrices in the LiDAR coordinate system are computed at the same time. The projection transformation matrix between the two frames is then solved, and the point cloud is accurately mapped onto the video stream images.

In the global coordinate transformation module, the IMU measurements are calibrated on the basis of the proposed indoor positioning system, and information from the LiDAR and the gyroscope is fused to achieve synchronous positioning of the LiDAR.

This paper also studied methods for point cloud saving, fast loading, and parallel computation; these improved the data loading speed and memory utilization and enhanced the real-time performance of graphics and image processing during 3D reconstruction.
Based on the above techniques, this paper implemented an indoor 3D reconstruction system based on multi-sensor fusion; a colored laser map is obtained by rendering the 3D point cloud with the camera images. Experiments verified the practicality of the calibration algorithm and the global positioning algorithm. These studies lay a foundation for indoor virtual reality and for the deep fusion of sensor information. Minimal code sketches illustrating several of the steps above are given below.
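The abstract describes chessboard calibration of the color camera with Levenberg-Marquardt refinement of the objective function. The following is a minimal sketch of that step using OpenCV, whose cv2.calibrateCamera refines the parameters with a Levenberg-Marquardt optimizer internally; the pattern size, square size, and image folder are illustrative assumptions, not values from the thesis.

```python
import glob

import cv2
import numpy as np

PATTERN = (9, 6)        # inner corners per row/column (assumed)
SQUARE_SIZE = 0.025     # chessboard square edge in metres (assumed)

# 3D corner coordinates in the board's own frame (z = 0 plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        # Refine detected corners to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# calibrateCamera refines intrinsics and distortion with a
# Levenberg-Marquardt optimizer, matching the approach in the abstract.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
print("RMS reprojection error:", rms)
print("Intrinsics K:\n", K)
```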
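For the first extrinsic-calibration step, the ground plane seen by the LiDAR is compared with the actual ground to recover the pitch and roll of the LiDAR relative to the vehicle. Below is a NumPy sketch of one common way to do this, assuming a set of ground-classified points is already available; it covers the rotation only, and the thesis's exact procedure is not spelled out in the abstract.

```python
import numpy as np

def ground_alignment(ground_pts):
    """Return a 3x3 rotation that aligns the ground-plane normal fitted to
    `ground_pts` (Nx3 LiDAR returns from the floor) with the vehicle's
    up axis [0, 0, 1], i.e. the pitch/roll part of the extrinsics."""
    centroid = ground_pts.mean(axis=0)
    # Plane normal = singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(ground_pts - centroid)
    n = vt[-1]
    if n[2] < 0:                      # make the normal point upwards
        n = -n
    up = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, up)               # rotation axis
    s, c = np.linalg.norm(v), np.dot(n, up)
    if s < 1e-9:                      # already aligned
        return np.eye(3)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    # Rodrigues' formula for the rotation taking n onto up.
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)
```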
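The wall-detection algorithm projects the point cloud to two dimensions and finds wall lines as peaks in Hough parameter space. A rough sketch of that idea with OpenCV's Hough transform follows; the rasterization resolution and vote threshold are assumptions, and the thesis's connected-component grouping of accumulator peaks is replaced here by simply keeping the strongest lines.

```python
import cv2
import numpy as np

def detect_walls(points_xy, resolution=0.05, threshold=120, max_lines=4):
    """points_xy: Nx2 horizontal coordinates in metres.
    Returns up to `max_lines` (rho, theta) pairs; rho is in grid cells,
    so multiply by `resolution` to convert back to metres."""
    # Rasterise the projected cloud into a binary occupancy image.
    pix = np.floor((points_xy - points_xy.min(axis=0)) / resolution).astype(int)
    img = np.zeros(tuple(pix.max(axis=0) + 1), np.uint8)
    img[pix[:, 0], pix[:, 1]] = 255

    # Each accumulator peak corresponds to one wall; four peaks are
    # expected for a rectangular room, as in the abstract.
    lines = cv2.HoughLines(img, rho=1, theta=np.pi / 180, threshold=threshold)
    return [] if lines is None else [tuple(l[0]) for l in lines[:max_lines]]
```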
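Once the projection transformation between the LiDAR and camera frames is known, the point cloud can be mapped onto the video stream and colored. Here is a minimal sketch of that mapping under a plain pinhole model; all inputs (points, image, K, R, t) are assumed given rather than taken from the thesis.

```python
import numpy as np

def colorize_cloud(points, image, K, R, t):
    """points: Nx3 LiDAR points; image: HxWx3 BGR frame; R, t: LiDAR-to-camera
    extrinsics; K: 3x3 intrinsics. Returns (kept_points, colors) for the
    points that project inside the frame."""
    cam = points @ R.T + t            # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0          # keep points ahead of the camera
    cam = cam[in_front]
    uv = cam @ K.T                    # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]       # perspective divide
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = image.shape[:2]
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return points[in_front][ok], image[v[ok], u[ok]]
```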
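Finally, for the point cloud saving and fast loading mentioned above, one simple approach consistent with the description is to store clouds as raw binary and memory-map them on load, so data is paged in on demand instead of parsed. The Nx6 (x, y, z, r, g, b) layout here is an assumption, not the thesis's actual format.

```python
import numpy as np

def save_cloud(path, points):
    """Write an Nx6 float32 array (x, y, z, r, g, b) as raw binary."""
    np.asarray(points, dtype=np.float32).tofile(path)

def load_cloud(path):
    """Memory-map the cloud; the OS pages data in lazily as it is accessed."""
    flat = np.memmap(path, dtype=np.float32, mode="r")
    return flat.reshape(-1, 6)
```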
Keywords/Search Tags: 3D point cloud, monocular vision, multi-sensor fusion, indoor 3D reconstruction, GPU