
Unsupervised Light Field Depth Estimation Based On Deep Learning

Posted on: 2021-05-24    Degree: Master    Type: Thesis
Country: China    Candidate: E C Zhou    Full Text: PDF
GTID: 2428330605482505    Subject: Computer technology
Abstract/Summary:
Depth estimation provides three-dimensional information about targets or scenes for applications such as face recognition, face liveness detection, and 3D reconstruction. It extracts depth cues from monocular, binocular, or multi-view images and recovers a depth map. Unlike conventional single-camera or multi-camera imaging, light field imaging based on microlens arrays captures image arrays at different viewpoints, such as 9×9, 11×11, or 13×13 grids, that is, a collection of images taken from different perspectives. Light field depth estimation takes a light field image as input and recovers the depth of the central sub-aperture image or of multiple sub-aperture images. Because a light field image provides rich 4D ray information, it can more effectively handle the problems that challenge traditional depth estimation methods, such as texture-less regions, weak texture, occlusion, and noise.

Convolutional neural networks have strong feature extraction capabilities, and this thesis applies deep learning to the light field depth estimation task. Because most real-scene datasets lack ground truth, supervised training is difficult; this thesis therefore studies and proposes unsupervised light field depth estimation methods for both monocular and multi-view input. These methods perform well on texture-less, weakly textured, occluded, and noisy regions. The main contribution is a general unsupervised light field depth estimation network structure together with the design of its unsupervised loss functions.

The monocular-input method estimates depth from the central sub-aperture image alone. Three novel unsupervised loss functions are designed according to the characteristics of the light field: (1) a photometric loss, which, under the illumination-consistency assumption between light field views, measures the per-pixel similarity between each view synthesized from the predicted disparity and its target view; (2) a defocus loss, which uses the all-in-focus image generated by the refocusing property of the light field and measures its similarity to the central sub-aperture image; (3) a symmetry loss, which exploits the occlusion symmetry in light field images and imposes a disparity-consistency constraint between symmetric sub-aperture views.

In the multi-view-input method, the input sub-aperture images supply occlusion and multi-view information for depth estimation. It mainly includes: (1) the network extracts occlusion information from the input multi-view images and the predicted disparity map and uses it in computing the unsupervised loss, mitigating the impact of occlusion on depth estimation; (2) a constrained angular entropy (CAE) loss is proposed, in which an angular-entropy consistency constraint on the generated angular samples improves the robustness of the algorithm in occluded regions and noisy environments.

Based on these strategies, this thesis proposes two light field depth estimation methods: Unsupervised Monocular Net (accepted by IEEE Transactions on Image Processing, a first-tier SCI journal) and Unsupervised Multi-input Net. Evaluations on synthetic light field benchmark datasets and real-scene datasets show good depth estimation performance.
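To make the photometric loss concrete, the following is a minimal NumPy sketch, not the thesis's actual implementation: each sub-aperture view is warped toward the central view using the predicted disparity (light-field geometry says a pixel with disparity d shifts by d times the angular offset between views), and the mean absolute error against the central image is taken. The function names, the nearest-neighbour warping, and the L1 similarity are illustrative assumptions; a real network would use differentiable bilinear sampling.

```python
import numpy as np

def warp_to_center(view, disparity, du, dv):
    """Warp one sub-aperture view toward the central view using the
    predicted disparity map (nearest-neighbour shift for simplicity)."""
    h, w = view.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Light-field geometry: a pixel with disparity d moves d*du pixels
    # horizontally and d*dv pixels vertically between adjacent views.
    src_x = np.clip(np.round(xs + disparity * du).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + disparity * dv).astype(int), 0, h - 1)
    return view[src_y, src_x]

def photometric_loss(views, offsets, center, disparity):
    """Mean absolute photometric error between each warped sub-aperture
    view and the central sub-aperture image (illumination consistency)."""
    errs = [np.abs(warp_to_center(v, disparity, du, dv) - center).mean()
            for v, (du, dv) in zip(views, offsets)]
    return float(np.mean(errs))
```

With a correct disparity map the warped views align with the central image and the loss is small, which is what drives the unsupervised training signal.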
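The angular-entropy idea behind the CAE loss can also be sketched. For one spatial pixel, the intensities sampled across all (warped) angular views should be nearly identical when the predicted disparity is correct, so the entropy of their distribution should be low; occluded or noisy pixels produce a spread of intensities and a higher entropy. This is an assumed illustrative formulation, with a hypothetical function name and bin count, not the thesis's exact definition.

```python
import numpy as np

def angular_entropy(samples, bins=16):
    """Shannon entropy of the intensity distribution across the angular
    samples of one spatial pixel; low entropy = photo-consistent pixel."""
    hist, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log() is defined
    return float(-(p * np.log(p)).sum())
```

Averaging this entropy over all pixels (and down-weighting pixels flagged as occluded) would give a loss that is minimized when the synthesized angular samples agree, which matches the robustness claim for occlusion and noise.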
Keywords/Search Tags: light field, unsupervised, depth estimation, stereo matching, occlusion