
Research On RGB-Image Guided Depth Image Reconstruction Method

Posted on: 2020-12-08
Degree: Master
Type: Thesis
Country: China
Candidate: X H Yang
Full Text: PDF
GTID: 2404330578955259
Subject: Information and Communication Engineering
Abstract/Summary:
Disease diagnosis that combines artificial intelligence with medical imaging has become a research hotspot in recent years. For people at high risk of gastrointestinal disease in particular, timely optical endoscopy and removal of precancerous lesions can lower the risk of intestinal carcinogenesis, so diagnosing the condition through depth reconstruction of intestinal imagery is important. Traditional geometric-optics approaches to recovering scene information have low accuracy, and the size of the intestine rules out depth sensors with complex structures such as binocular cameras. At the same time, specular spots, occlusion, and the sparse surface texture of intestinal tissue cause part of the depth information acquired by the sensor to be lost, which poses a challenge for 3D reconstruction of intestinal images.

To address these problems, this thesis first proposes a single-image depth prediction method based on a Multi-scale fusion Deep Convolutional Neural Network (MDCNN). A deep network with multiple scales extracts rich image features, and deconvolution operations fuse those scales within a single convolutional framework. The proposed model resolves the mismatch between input and output sizes and the loss of feature information that affect depth prediction with traditional neural network models.

Second, because the depth maps produced by convolutional models are blurry, lack smooth local detail, and are restricted in input size, this thesis builds on the previous model and proposes a joint Conditional Random Field and Multi-scale fusion Deep Fully Convolutional Network (CRF-MDFCN). The Simple Linear Iterative Clustering (SLIC) superpixel segmentation algorithm preprocesses the input RGB images, the resulting superpixels are fed into the improved fully convolutional network, and the CRF-MDFCN model is trained as a unified deep learning framework. The strong inference ability of the conditional random field, a probabilistic graphical model, refines local information and yields depth maps that match the input size and show clearer local detail. Finally, the depth image acquired by the depth sensor is fused with the prediction to compensate for depth loss caused by external factors and by the sensor itself, producing a still clearer depth map.

The feasibility and advantages of the above models are verified on NYU Depth v2, Make3D, and an intestinal dataset, with comparisons against current mainstream depth prediction methods. The proposed models show advantages in both subjective visual quality and objective evaluation.
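To make the pipeline concrete, the following is a minimal PyTorch sketch of a two-scale depth-prediction network in the spirit of the multi-scale fusion described above, together with the final sensor-depth hole-filling step. The class and function names (MultiScaleDepthNet, fuse_with_sensor_depth), the channel widths, and the use of exactly two scales are illustrative assumptions rather than the thesis's actual MDCNN or CRF-MDFCN design, and the CRF refinement stage is omitted.

```python
# Minimal sketch, assuming PyTorch. Names and layer sizes are illustrative,
# not the architecture proposed in the thesis.
import torch
import torch.nn as nn


class MultiScaleDepthNet(nn.Module):
    """Predicts a dense depth map from a single RGB image by extracting
    features at two resolutions and fusing them with a transposed
    convolution so the output matches the input size."""

    def __init__(self):
        super().__init__()
        # Coarse branch: larger receptive field, features at 1/4 resolution.
        self.coarse = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(inplace=True),
        )
        # Fine branch: full-resolution local features.
        self.fine = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Deconvolution (transposed convolution) upsamples the coarse features
        # back to the input resolution before fusion.
        self.upsample = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=4)
        # Fusion head maps the concatenated features to a one-channel depth map.
        self.fuse = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, rgb):
        coarse = self.upsample(self.coarse(rgb))             # N x 32 x H x W
        fine = self.fine(rgb)                                 # N x 32 x H x W
        return self.fuse(torch.cat([coarse, fine], dim=1))    # N x 1 x H x W


def fuse_with_sensor_depth(predicted, sensor):
    """Fill holes in a sensor depth map (zeros mark missing readings, e.g.
    from specular spots or occlusion) with the network prediction."""
    mask = sensor > 0
    return torch.where(mask, sensor, predicted)


if __name__ == "__main__":
    net = MultiScaleDepthNet()
    rgb = torch.rand(1, 3, 128, 128)       # dummy RGB input
    sensor = torch.rand(1, 1, 128, 128)
    sensor[sensor < 0.3] = 0.0             # simulate missing depth readings
    pred = net(rgb)
    completed = fuse_with_sensor_depth(pred, sensor)
    print(pred.shape, completed.shape)     # both torch.Size([1, 1, 128, 128])
```

Here the transposed convolution restores the coarse branch to the input resolution before fusion, which is one simple way to keep the predicted depth map the same size as the RGB input, the property the abstract emphasizes.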
Keywords/Search Tags: Convolutional neural network, Deep fusion, Conditional random field, Depth prediction, Superpixel segmentation