
Camera Array Based Light Field Imaging And Depth Estimation

Posted on: 2015-07-13
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z L Xiao
Full Text: PDF
GTID: 1228330452965506
Subject: Computer Science and Technology
Abstract/Summary:
Light field imaging is well known for its focus-after-shoot, extendable depth of field, and switchable points of view, and can be considered a next-generation imaging technique. In digital imaging and computer vision, much research on light field imaging has been published in recent years. Unlike a traditional imaging system, a light field system records on its image plane not only the distribution of light intensity but also the angular information of the incident light rays. Unfortunately, existing light field systems can obtain only discrete angular information, so rendering an image from such an angularly under-sampled light field produces strong angular aliasing artifacts, which greatly deteriorate image quality. The angular information of the light rays is also coupled with the scene depth, so light-field-based depth estimation has become an important issue in light field theory. Based on an 8×8 camera array system, this dissertation studies both light field imaging theory and light-field-based depth estimation, with the following contributions:

(1) A novel angular aliasing detection algorithm is proposed, based on a special light field image generation algorithm that employs random aperture patterns. To explain the angular aliasing artifacts, we first model the causes of aliasing in a 2D light field framework. This model mathematically explains the fact that angular aliasing is determined simultaneously by the focal length of the imaging system, the angular sampling density, the scene depth, and the scene textures. We then point out that aliased image pixels are very sensitive to variations of the sampling density on the camera plane. Based on this observation, we use the coefficient of variation of an image set rendered from random aperture patterns as an aliasing metric.
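To make the metric concrete, a minimal sketch (illustrative only; the array shapes, function name, and toy data are assumptions, not the dissertation's implementation) computes the coefficient of variation per pixel over a stack of renderings, one per random aperture pattern:

```python
import numpy as np

def aliasing_map(renderings, eps=1e-8):
    """Per-pixel coefficient of variation across a stack of renderings.

    `renderings`: array of shape (K, H, W), one grayscale rendering per
    random aperture pattern (i.e. per random subset of camera-array views).
    Pixels whose value depends on which views were used (high CV) are
    flagged as angularly aliased; well-sampled pixels stay stable (low CV).
    """
    stack = np.asarray(renderings, dtype=np.float64)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    return std / (mean + eps)  # coefficient of variation per pixel

# Toy check: a pixel whose value flips with the aperture pattern scores
# higher than one that is constant across all patterns.
stack = np.zeros((4, 2, 2))
stack[:, 0, 0] = [0.2, 0.8, 0.2, 0.8]   # unstable -> aliased
stack[:, 1, 1] = 0.5                    # stable -> non-aliased
cv = aliasing_map(stack)
assert cv[0, 0] > cv[1, 1]
```

This is what makes the metric depth-free and texture-independent: it never compares against a reference image, only against the variability induced by resampling the camera plane.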
Most importantly, the proposed algorithm is free of depth estimation and independent of texture. In the experiments, we validate the proposed algorithm on several groups of real light field datasets.

(2) Based on the aliasing detection algorithm, we propose a novel aliasing reduction algorithm. Distinct from existing methods, we address angular aliasing in the light field rendering stage. Furthermore, the proposed algorithm does not need scene depth as a rendering prior. To avoid boundary problems, we introduce a multi-scale gradient-field Poisson fusion algorithm, which seams different levels of non-aliased image parts together to reduce light field aliasing. We test the proposed algorithm on both synthetic and real scene datasets. In the experiments, the proposed rendering significantly outperforms traditional light field rendering and prefiltered light field rendering, and obtains results similar to depth-aware light field rendering.

(3) Based on an analysis of the different depth cues within light field datasets, we propose a novel scene depth estimation algorithm. We first point out that the differences between the disparity cue and the image focus cue can be explained in the light field frequency domain. We then employ normalized cross-correlation and the sum-modified-Laplacian to extract these two kinds of depth cues, respectively. To improve the accuracy and robustness of the depth estimation, we introduce a linear combination that fuses the two cues together. Compared with methods based on a single depth cue, the proposed method generates more accurate scene reconstruction results, even in regions with depth discontinuities and similar textures.

(4) Foreground occlusion is a significant challenge in depth estimation. In this dissertation, we propose a K-means clustering based depth estimation algorithm that handles foreground occlusion and needs no foreground detection.
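The clustering idea can be sketched as follows (a hypothetical illustration with assumed names, k=2, and toy colors; the dissertation's actual model is more general): the color samples a scene point gathers from all camera views are split with K-means, and photo-consistency is then computed on the majority cluster while the minority cluster, assumed to come from a foreground occluder, is discarded.

```python
import numpy as np

def split_occluded_samples(samples, iters=20):
    """Split per-pixel color samples from all views into k=2 clusters
    with a tiny K-means and return the majority cluster, which is
    assumed to see the true (unoccluded) surface."""
    x = np.asarray(samples, dtype=np.float64)   # (n_views, channels)
    # Deterministic init: first sample and the sample farthest from it.
    far = np.linalg.norm(x - x[0], axis=1).argmax()
    centers = np.stack([x[0], x[far]])
    for _ in range(iters):
        # Assign each sample to its nearest center, then update centers.
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean(axis=0)
    majority = np.bincount(labels, minlength=2).argmax()
    return x[labels == majority]                # samples kept for matching

# Toy check: 6 views see the reddish background, 2 views are blocked by a
# dark occluder; the majority cluster keeps only the background samples.
views = np.array([[0.9, 0.1, 0.1]] * 6 + [[0.05, 0.05, 0.05]] * 2)
kept = split_occluded_samples(views)
assert len(kept) == 6
```

Because the split is driven purely by the color statistics of the samples themselves, no explicit foreground detection step is required, matching the claim above.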
First, we characterize the differences between multi-view reconstruction with and without foreground occlusion. Considering that both scene depth and appearance are unknown, we propose a generalized model for depth estimation. We then propose a coarse-to-fine iterative reconstruction approach in a global optimization framework, which performs well on the camera array system. Even when all views are partially occluded, our approach can recover an accurate depth map as well as the scene appearance. Experimental results indicate that our approach is more robust to foreground occlusion and outperforms state-of-the-art approaches.
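The coarse-to-fine search pattern can be illustrated with a simplified per-pixel stand-in for the dissertation's global optimization (the function name, level counts, and synthetic cost below are assumptions): candidate depths are sampled over an interval, the minimizer of a photo-consistency cost is kept, and the interval is shrunk around it at each level.

```python
import numpy as np

def coarse_to_fine_depth(cost, d_min, d_max, levels=3, samples=8):
    """Iteratively shrink the depth search interval around the best
    hypothesis. `cost(d)` stands in for a photo-consistency measure
    over the camera-array views at hypothesized depth d."""
    lo, hi = d_min, d_max
    best = lo
    for _ in range(levels):
        candidates = np.linspace(lo, hi, samples)
        best = candidates[np.argmin([cost(d) for d in candidates])]
        # Halve the interval, centered on the current best hypothesis.
        span = (hi - lo) / 4.0
        lo, hi = max(d_min, best - span), min(d_max, best + span)
    return best

# Toy check with a synthetic cost minimised at depth 2.3.
est = coarse_to_fine_depth(lambda d: (d - 2.3) ** 2, 0.0, 10.0)
assert abs(est - 2.3) < 0.2
```

Each refinement level reuses the same small number of cost evaluations over a narrower range, which is what makes the iterative scheme tractable on a 64-view camera array.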
Keywords/Search Tags:Camera array, Light field imaging, Aliasing artifacts, Depth estimation, Foreground occlusion