As an emerging 3D imaging technology, light field imaging records not only the position and intensity of light rays in space but also their angular information, which is closely related to scene depth and the three-dimensional geometry of objects and cannot be captured by a traditional camera. Light field imaging has therefore been widely applied and developed in recent years in fields such as 3D reconstruction and measurement, and 3D object detection and recognition.

An object point in space, imaged from two viewpoints separated by a certain distance, has different coordinates in the two view images. This difference is called disparity (parallax) in computer vision. The distance between the object point and the observation plane, that is, the depth of the object point, can be recovered from the disparity together with the calibration parameters of the different viewpoints, so disparity is a crucial cue for obtaining the depth information of a scene. At the same time, owing to its special optical imaging structure, a light field camera can capture multi-view image information of the target scene in a single exposure. Based on this rich information, a disparity estimation algorithm designed to match the structure of the light field can produce high-precision disparity information of the target scene, providing sufficient and reliable 3D data support for advanced vision applications built on light field imaging. Research on disparity estimation algorithms based on light field imaging therefore has significant scientific and practical value.

However, predicting the disparity of a target scene from light field images faces many challenges: occlusion, weak texture, noise, and other complex scene conditions affect the accuracy of disparity estimation, while the redundant multi-view information and the high complexity of traditional light field disparity estimation algorithms lead to excessive computation time. To address these problems, this research is carried out in the following parts:

(1) Investigate the basic principles of light field imaging, study the parametric representation models of the light field and the ways of acquiring light field images, and analyze the principle and feasibility of light field disparity estimation from multiple representations of the light field.

(2) To improve the precision of disparity estimation, a disparity estimation method based on the symmetry of multi-directional partial refocusing sequences of the light field image is proposed, building on the principle of light field refocusing. The method makes full use of the effective multi-view information of the light field to improve computational accuracy in complex occluded scenes. Experiments on the HCI 4D Light Field Benchmark show that the proposed method significantly improves estimation accuracy.

(3) To further improve accuracy and reduce computation time, a light field disparity estimation method that incorporates scene context information is proposed. The method is based on an end-to-end convolutional neural network and has the advantage of obtaining a depth map from a single light field image; its reduced computational cost correspondingly decreases the time consumption. To improve computational accuracy, a multi-channel encoding module extracts the multi-directional features of the light field, and a feature aggregation module aggregates the context information of pixels; at each stage of feature aggregation, structural features of the central sub-aperture image at different levels of detail are added. Experiments on the HCI 4D Light Field Benchmark show that the BadPix and MSE of the proposed method are 31.2% and 54.6% lower, respectively, than those of the comparison methods, and the average depth estimation time is 1.2 seconds, much faster than the comparison methods.
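As background for the disparity-to-depth relation described above, the following is a minimal sketch for the simplest rectified two-view case, where depth follows from disparity and two calibration parameters (the focal length and baseline values here are illustrative, not taken from the thesis):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Recover depth Z = f * B / d for a rectified two-view setup.

    disparity_px: disparity d of the object point, in pixels
    focal_px:     focal length f of the calibrated cameras, in pixels
    baseline_m:   baseline B between the two viewpoints, in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px


# Example with made-up calibration: f = 800 px, B = 0.05 m, d = 4 px -> Z = 10 m
z = depth_from_disparity(4.0, 800.0, 0.05)
```

The inverse relation between disparity and depth is why small disparity errors on distant points translate into large depth errors, which motivates the pursuit of sub-pixel disparity accuracy in the methods above.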
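The refocusing principle underlying method (2) can be illustrated by the standard shift-and-sum operation on sub-aperture views: each view is shifted in proportion to its angular coordinate and the results are averaged, so that points at the depth corresponding to the shift factor come into focus. This is a generic sketch (integer pixel shifts only, hypothetical view layout), not the thesis's algorithm:

```python
import numpy as np

def refocus(sub_aperture_views, view_coords, alpha):
    """Shift-and-sum refocusing of light field sub-aperture views.

    sub_aperture_views: list of 2-D arrays, one image per viewpoint
    view_coords:        list of (u, v) angular offsets from the central view
    alpha:              shift factor selecting the refocusing depth plane

    Each view is shifted by alpha * (u, v) (rounded to whole pixels for
    brevity; real implementations interpolate) and the views are averaged.
    """
    acc = np.zeros_like(sub_aperture_views[0], dtype=float)
    for img, (u, v) in zip(sub_aperture_views, view_coords):
        du = int(round(alpha * u))
        dv = int(round(alpha * v))
        acc += np.roll(np.roll(img, du, axis=0), dv, axis=1)
    return acc / len(sub_aperture_views)
```

A disparity estimator built on this idea sweeps alpha over candidate values and, for each pixel, picks the alpha at which some focus measure (e.g. local variance across views) is optimal; method (2) refines this by exploiting the symmetry of partial refocusing sequences in multiple directions.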