
3D Measurement Based On Light Field Imaging

Posted on: 2021-07-28 | Degree: Doctor | Type: Dissertation
Country: China | Candidate: J L Wu | Full Text: PDF
GTID: 1480306485456414 | Subject: Signal and Information Processing
Abstract/Summary:
The effective perception of 3D information has a profound impact on human activity, and 3D measurement technology is widely used in industry and daily life. In the field of space exploration, for example, binocular stereo measuring systems are used to monitor changes in the morphology of a spacecraft's solar panels, while aircraft production lines adopt 3D measurement technology to guide the assembly of components. As a new type of imaging technology, light field imaging can capture the spatial direction and intensity information of light rays in a single shot, enabling capabilities that regular cameras lack, such as digital refocusing, depth estimation, and light field editing. This makes it possible to integrate 2D and 3D measuring functions in a single imaging device, so it is well worth exploring technologies and schemes for performing 3D measurement with light field imaging.

Based on the spatial direction and intensity information recorded by the light field, this dissertation studies the theory and technology of 3D measurement through light field imaging. Compared with stereo imaging, light field imaging has the advantage of dense and regular sampling, which enables new measurement methods. Within this theme, the dissertation focuses on the plenoptic camera, which encodes the light field with a microlens array, and carries out the corresponding theoretical analysis, method research, and experimental verification. The main research work is as follows:

(1) Precise location of the projection centers of the microlenses is a prerequisite for high-quality light field decoding. This dissertation therefore proposes a new method that overcomes the drawbacks of existing strategies, such as low accuracy, limited scope of application, and reliance on manually set parameters. The experimental results show that, compared with existing methods, the proposed method runs automatically, achieves higher precision, and applies to a wider range of cases.

(2) During imaging, the plenoptic camera encodes the spatial direction and intensity information of light onto a single sensor plane, and a series of preprocessing steps is required to recover the four-dimensional light field from the raw sensor data. After an in-depth study of the structure and imaging characteristics of plenoptic cameras, a new preprocessing method is proposed. Compared with the customary pipeline, it simplifies the processing flow and reduces the error accumulation caused by separate processing steps. By handling the error of the microlens array explicitly, confusion between angular and spatial information is avoided, which lays a foundation for subsequent applications.
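As a rough illustration of the decoding step summarized in (2), and not the preprocessing method proposed in the dissertation, the following Python sketch slices a raw lenslet image into a 4D light field under the simplifying assumption of an ideal, axis-aligned microlens grid whose projection centers are already known and lie away from the image border; the function name, the patch size, and the regular-grid assumption are illustrative only.

    import numpy as np

    def decode_to_subapertures(raw, centers, patch=9):
        """Slice a raw lenslet image into a 4D light field L(u, v, s, t).

        raw     : 2D array, the demosaicked sensor image.
        centers : (rows, cols, 2) array of microlens projection centers
                  (y, x) in pixels, assumed rectified to a regular grid.
        patch   : odd number of pixels sampled under each microlens per axis.
        """
        rows, cols, _ = centers.shape
        half = patch // 2
        lf = np.zeros((patch, patch, rows, cols), dtype=raw.dtype)
        for s in range(rows):
            for t in range(cols):
                cy, cx = np.round(centers[s, t]).astype(int)
                # the pixel patch under one microlens holds the angular samples
                lf[:, :, s, t] = raw[cy - half:cy + half + 1,
                                     cx - half:cx + half + 1]
        # a sub-aperture image for a fixed angular index (u, v) is lf[u, v]
        return lf

In practice the microlens grid is rotated and distorted, which is exactly why the center-locating step in (1) and the explicit error handling in (2) are needed before such a simple slicing can be applied.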
(3) To extract the 3D information of the scene, the disparity estimation problem for light fields is studied in depth. Owing to the dense and regular sampling of the light field, disparity estimation algorithms for light fields can differ significantly from those of stereo vision. Customary light field disparity estimation methods rely on hand-crafted features: an initial disparity is estimated by minimizing a cost function and then refined with post-processing steps. On the one hand, the computational cost of such methods is often high, making real-time application difficult; on the other hand, hand-crafted features cannot fully express the characteristics of the scene, so such algorithms struggle to obtain accurate results for fine structures, weakly textured areas, and occlusion edges. Inspired by the development of deep learning in computer vision, this dissertation establishes a disparity estimation model that exploits multi-scale information. The experimental results show that the model can extract 3D information from the scene accurately and quickly.

(4) To obtain the absolute scale of the scene, the conversion relationship between light field disparity and the absolute scale of the scene is established based on the projection model of the plenoptic camera. The effectiveness of the proposed method is verified by a final scene measurement experiment.
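The dissertation derives this conversion from the plenoptic camera's own projection model, which is not reproduced here. Purely as an illustration of the idea behind (4), the sketch below applies the generic triangulation relation Z = f * b / d, where d is the disparity measured between adjacent sub-aperture views, b their effective baseline, and f the focal length in pixels; the parameter names and the pinhole-style assumption are hypothetical stand-ins rather than the calibrated model used in the thesis.

    import numpy as np

    def disparity_to_depth(disparity_px, focal_px, baseline_m, eps=1e-6):
        # Generic triangulation: Z = f * b / d.
        #   disparity_px : per-view disparity (pixels), scalar or array
        #   focal_px     : focal length expressed in pixels
        #   baseline_m   : baseline between adjacent sub-aperture views, metres
        d = np.abs(np.asarray(disparity_px, dtype=np.float64))
        return focal_px * baseline_m / np.maximum(d, eps)

    # Example: 0.5 px disparity, 3000 px focal length, 0.1 mm baseline
    # gives a depth of about 0.6 m.
    depth = disparity_to_depth(0.5, 3000.0, 1e-4)

With such a metric conversion in hand, the disparity maps produced in (3) can be turned into absolute 3D coordinates, which is the role the final scene measurement experiment verifies.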
Keywords/Search Tags:Light field imaging, Preprocessing, Disparity estimation, 3D measurement, Deep learning