
Recovery of Depth Maps with High Resolution and High Accuracy

Posted on: 2017-04-22
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y C Wang
GTID: 1318330566456053
Subject: Computer Science and Technology

Abstract/Summary:
Depth recovery is the acquisition of the 3D geometric information of real scenes, and it is an important topic in the computer vision and computer graphics communities. This research focuses on recovering depth information with high accuracy and high resolution, especially for deformable surfaces with complicated motion in dynamic scenes. High-resolution and high-accuracy depth recovery is essential in many research fields, topics, and applications. Consumer RGB-D cameras have appeared in recent years and are popular for their good mobility, low cost, and high frame rate. However, the main disadvantages of these depth cameras are low resolution, caused by chip size limitations, and low accuracy, caused by noise from ambient illumination perturbation. Obtaining high-resolution and high-accuracy depth images from RGB-D cameras is therefore a challenging task. To address this problem, this thesis focuses on several aspects: fusion of passive stereo vision and active depth acquisition, 3D scene flow estimation based on RGB-D data, and joint depth denoising and super-resolution based on RGB-D image sequences.

The thesis first presents a depth super-resolution framework that fuses depth imaging and stereo vision to produce high-resolution and high-accuracy depth maps. Depth cameras and stereo vision each have their own limitations, but their range-sensing characteristics are complementary, so combining the two approaches can produce more satisfactory results than either one alone. Unlike previous fusion methods, we initially take the noisy depth observation from the depth camera as prior information about the scene structure. This prior is also used to infer structural information such as depth discontinuities and occlusions, which is essential for improving the accuracy of the depth map in the fusion process. The prior knowledge further helps to overcome the difficulty of intensity inconsistency in the image observations from the stereo vision component (a minimal sketch of this fusion idea appears in the first code example below). Experimental results demonstrate the effectiveness and accuracy of the proposed method.

An improved dense scene flow method based on RGB-D data is then proposed. The accuracy of scene flow estimation is restricted by several challenges, such as occlusion and large displacement motion. When occlusion happens, the positions inside the occluded regions lose their corresponding counterparts in the preceding and succeeding frames, and large displacement motion increases the complexity of motion modeling and computation. Moreover, occlusion and large displacement motion are highly related problems in scene flow estimation; for example, large displacement motion often leads to considerably occluded regions in the scene. To handle occlusion, we model the occlusion status of each point in our problem formulation and jointly estimate the scene flow and the occluded regions. To deal with large displacement motion, we employ an over-parameterized scene flow representation that models both the rotation and translation components of the scene flow, since large displacement motion cannot be well approximated by translational motion alone (see the second code example below). Furthermore, we employ a two-stage optimization procedure for this over-parameterized representation. In the first stage, we propose a new RGB-D PatchMatch method, applied mainly in the RGB-D image space to reduce the computational complexity introduced by large displacement motion. In the quantitative evaluation on the Middlebury dataset, our method outperforms other published methods, and the improved performance is also comprehensively confirmed on real data acquired by a Kinect sensor.
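Returning to the fusion framework described above: the following is a minimal sketch of one possible formulation, assuming a precomputed stereo cost volume and a per-pixel confidence map for the depth-camera prior. The winner-take-all selection, the quadratic prior penalty, and the weight `lam` are illustrative assumptions, not the formulation actually used in the thesis.

```python
import numpy as np

def fuse_stereo_with_depth_prior(cost_volume, depth_prior, prior_confidence, lam=0.1):
    """Hypothetical fusion of stereo matching with a noisy depth-camera prior.

    cost_volume      : (H, W, D) stereo matching cost per depth hypothesis
    depth_prior      : (H, W) noisy depth-camera observation, in hypothesis units
    prior_confidence : (H, W) weight for the prior; it would be lowered near
                       inferred depth discontinuities and occlusions
    lam              : illustrative trade-off weight (an assumption)
    """
    num_hyp = cost_volume.shape[-1]
    hypotheses = np.arange(num_hyp, dtype=np.float32)
    # Quadratic penalty for deviating from the depth-camera observation.
    prior_term = (hypotheses[None, None, :] - depth_prior[..., None]) ** 2
    fused_cost = cost_volume + lam * prior_confidence[..., None] * prior_term
    return fused_cost.argmin(axis=-1)  # per-pixel winner-take-all depth index
```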
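The over-parameterized scene flow representation can be pictured as attaching a local rigid motion to each scene point, i.e., six parameters (rotation plus translation) instead of a purely translational three. The sketch below uses hypothetical helper names and an axis-angle rotation; it only illustrates why such a representation captures rotational motion that a translation-only model cannot, and the thesis's exact parameterization may differ.

```python
import numpy as np

def rotation_from_axis_angle(omega):
    """Rodrigues' formula: axis-angle vector omega (3,) -> 3x3 rotation matrix."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)
    k = omega / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # cross-product matrix of the unit axis
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def over_parameterized_flow(point, omega, t):
    """Scene flow of a 3D point under a per-point rigid motion (omega, t).

    A translation-only model would fix omega = 0; keeping the rotational
    component lets large displacement motion be approximated locally.
    (The occlusion status of each point would be a further variable,
    jointly estimated with the motion, as described above.)
    """
    return rotation_from_axis_angle(omega) @ point + t - point
```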
A depth enhancement method is then described for RGB-D image sequences from dynamic scenes. The proposed method consists of two stages: depth image alignment and depth image fusion. Depth image alignment generates multiple depth image observations for a selected depth image by finding similar 3D structures in its spatio-temporally neighboring images, and depth image fusion generates a high-resolution result by fusing these observations. For depth image alignment, we propose a superpixel-based motion estimation approach for RGB-D images, which performs robustly in the presence of large displacement motion and occlusion. For depth image fusion, we model the task as a regression problem and design a deep convolutional neural network that learns the complicated mapping from multiple depth image observations to the fused depth image by training on a large amount of data (a rough sketch of this regression view follows). We evaluate our method qualitatively and quantitatively on public RGB-D image sequences to show its superior performance.
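As a rough illustration of that regression view of depth image fusion, here is a minimal PyTorch sketch; the layer widths, the number of aligned observations `K`, and the residual-over-mean design are assumptions, since the abstract does not specify the actual network architecture.

```python
import torch
import torch.nn as nn

class DepthFusionNet(nn.Module):
    """Toy convolutional regressor mapping K aligned depth observations of a
    view, shape (B, K, H, W), to one fused depth map, shape (B, 1, H, W)."""

    def __init__(self, num_observations=5):  # K = 5 is an assumption
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_observations, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, observations):
        # Regress a correction on top of the mean observation, so the network
        # only has to learn the residual mapping.
        mean_depth = observations.mean(dim=1, keepdim=True)
        return mean_depth + self.body(observations)

# Training would minimize a pixel-wise loss against ground-truth depth, e.g.:
# loss = torch.nn.functional.l1_loss(DepthFusionNet()(obs), gt_depth)
```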
Keywords: depth recovery, passive stereo vision, active depth sensing, scene flow, large displacement motion, RGB-D camera, depth denoising, depth super-resolution, convolutional neural network, deep learning