
Monocular Multi-view Depth Estimation Based On Deep Learning

Posted on: 2021-10-08  Degree: Master  Type: Thesis
Country: China  Candidate: J Y Chen  Full Text: PDF
GTID: 2518306104986389  Subject: Information and Communication Engineering
Abstract/Summary:
The rapid development of computer hardware and software has stimulated a boom in artificial intelligence, and AI-based technologies such as autonomous driving, mobile robots and drones have in turn become current research hotspots. These intelligent applications rely on perceiving environmental information, such as the distance between obstacles and the moving robot or vehicle. The camera has become a popular ranging sensor because of its low cost, small size, light weight and rich information content. The depth map recovered by monocular depth estimation can guide obstacle avoidance and navigation for robots and unmanned vehicles, and can also be used in 3D reconstruction and augmented reality. Monocular depth estimation has therefore become an important research topic in computer vision and robotics.

Algorithms based on traditional three-dimensional reconstruction compute depth with geometric methods such as epipolar search or triangulation, but they struggle to recover depth in low-texture regions of an image and rarely produce a completely dense depth map. Deep learning-based methods, on the other hand, can produce dense depth estimates but remain immature, facing problems such as low reliability, accuracy that still needs improvement, and poor generalization.

This thesis presents a multi-view monocular depth estimation method based on deep learning. The model consists of a tightly coupled optical flow network and a depth network, connected by depth-flow conversion modules at multiple scales. Matching information produced by the optical flow network is converted into depth information that guides depth prediction, and depth information produced by the depth network is converted back into matching information and fed to the optical flow network for refinement. Through repeated conversions between optical flow and depth at different scales, the two networks are tightly coupled and optimized jointly. When predicting optical flow, we add an epipolar feature layer that encodes camera pose constraints, which improves the predictions in static scenes. The network can also be extended to take an arbitrary number of images from different viewpoints as input and estimate the depth map of the reference image. A sketch of the underlying depth-flow relationship is given after this abstract.

Experimental results on six public datasets show that our method obtains better depth estimates than existing algorithms while running faster than most existing depth estimation methods. The optical flow output by the network is also better than, or comparable to, current optical flow estimation methods.
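To illustrate the depth-to-flow direction of the conversion modules described above, the following is a minimal NumPy sketch of how a depth map and a relative camera pose induce a rigid optical flow field. It is not the thesis' implementation; the function name, arguments, and the assumption of known intrinsics and a known reference-to-source pose are ours.

import numpy as np

def depth_to_flow(depth, K, R, t):
    # depth : (H, W) depth map of the reference view
    # K     : (3, 3) camera intrinsics
    # R, t  : rotation (3, 3) and translation (3,) from reference to source view
    # Returns an (H, W, 2) flow field of (u, v) displacements.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW

    # Back-project each pixel to a 3D point in the reference camera frame,
    # then transform it into the source camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts_src = R @ pts + t.reshape(3, 1)

    # Project into the source image; the displacement is the rigid optical flow.
    proj = K @ pts_src
    proj = proj[:2] / proj[2:3]
    flow = (proj - pix[:2]).T.reshape(H, W, 2)
    return flow

The reverse direction (flow to depth) amounts to triangulating each correspondence along its epipolar line, which is why constraining the predicted flow with an epipolar feature layer helps in static scenes.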
Keywords/Search Tags:Depth estimation, Optical flow, Deep learning