In computer vision, depth estimation and 3D reconstruction of a scene from images is a classical and still open problem. However, existing depth-recovery methods based on binocular stereo vision are sensitive to changes in scene texture, and the computed disparity map often suffers from missing regions and local speckles caused by occlusion and illumination differences between the two cameras, so the true structure of the scene cannot be fully recovered, which limits the stability and applicability of such methods. Therefore, as a step toward further research, this paper studies the recovery of scene depth information from video captured by a moving monocular camera. The main work is summarized as follows:

(1) An improved calibration method based on Zhang's camera calibration is designed. Following Zhang Zhengyou's method, checkerboard images are collected from multiple angles to calibrate the camera. During calibration, however, the number and viewing angles of the calibration images strongly affect the result. Therefore, the number and the angles of the calibration images are each varied as controlled variables under Zhang's method, and 15 calibration images are selected for calibration. On this basis, the distortion parameters obtained from the first calibration are fed into a second calibration as known quantities, which improves the accuracy of the calibration result (see the first code sketch below).

(2) A disparity-computation method that selects the best image pair from monocular video is realized. First, to address the sensitivity of disparity computation to scene texture, a semi-global stereo-matching method is presented that adaptively chooses the matching-window size according to the image standard deviation. Then, the influence of the horizontal displacement between two frames on the disparity result is explored, and the optimal displacement is adopted as the selection criterion: with one frame as the reference, SURF feature matching is used to find a second frame that satisfies this criterion. The selected pair is rectified so that corresponding pixels lie on the same scan line, after which the disparity is computed (see the second and third sketches below).

(3) A depth-estimation method based on a fusion optimization strategy is obtained. Using the fusion technique, missing values in the disparity map are filled by interpolation and filtering so that a complete disparity map is obtained. Then, based on the model relating disparity to depth, the depth of each pixel is computed, yielding a set of three-dimensional points of the image scene (see the last sketch below). Finally, the effectiveness of the proposed method is verified by experiments.
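The two-pass calibration idea in contribution (1) can be illustrated with OpenCV. The following is a minimal sketch, not the thesis's exact implementation: the image path pattern, the 9x6 board size, the 25 mm square size and the particular flags used to hold the distortion coefficients fixed in the second pass are all assumptions made for illustration.

```python
# Minimal sketch of the two-pass calibration, assuming OpenCV, 15 checkerboard
# views under calib/*.png, a 9x6 inner-corner board and 25 mm squares (all
# placeholder values).
import glob
import cv2
import numpy as np

BOARD = (9, 6)       # inner corners per row / column (assumed)
SQUARE = 25.0        # square size in mm (assumed)

# 3-D coordinates of the board corners on the Z = 0 plane
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for path in sorted(glob.glob("calib/*.png"))[:15]:   # 15 views, as in the thesis
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(objp)
    img_pts.append(corners)

size = gray.shape[::-1]

# First pass: standard Zhang-style calibration
rms1, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

# Second pass: keep the first-pass distortion coefficients fixed ("known
# quantities") and refine only the intrinsic matrix.
flags = (cv2.CALIB_USE_INTRINSIC_GUESS |
         cv2.CALIB_FIX_K1 | cv2.CALIB_FIX_K2 | cv2.CALIB_FIX_K3 |
         cv2.CALIB_FIX_TANGENT_DIST)
rms2, K2, dist2, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, K, dist,
                                            flags=flags)
print("reprojection error: pass 1 = %.4f px, pass 2 = %.4f px" % (rms1, rms2))
```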
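For the frame-selection step of contribution (2), the sketch below picks, from a monocular video, the frame whose median horizontal SURF displacement relative to a reference frame is closest to a target value. The video path, the target displacement D_TARGET, the Hessian threshold and the ratio-test constant are illustrative assumptions; SURF also requires an opencv-contrib build with the non-free modules enabled (ORB could be substituted).

```python
# Minimal sketch of selecting the second frame of the stereo pair from a
# monocular video via SURF matching; D_TARGET, the Hessian threshold and the
# ratio-test constant are illustrative assumptions.
import cv2
import numpy as np

D_TARGET = 40.0                       # assumed optimal horizontal shift (px)

cap = cv2.VideoCapture("scene.mp4")   # placeholder video path
ok, ref = cap.read()                  # first frame is used as the reference
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # needs opencv-contrib
kp_ref, des_ref = surf.detectAndCompute(ref_gray, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)

best_frame, best_pts, best_err = None, None, float("inf")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = surf.detectAndCompute(gray, None)
    if des is None:
        continue
    # Lowe ratio test to keep only reliable correspondences
    pairs = matcher.knnMatch(des_ref, des, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.7 * n.distance]
    if len(good) < 20:
        continue
    # Median horizontal displacement of the matched features
    dx = np.median([kp[m.trainIdx].pt[0] - kp_ref[m.queryIdx].pt[0]
                    for m in good])
    err = abs(abs(dx) - D_TARGET)
    if err < best_err:                # frame closest to the target displacement
        best_err, best_frame = err, frame
        best_pts = (np.float32([kp_ref[m.queryIdx].pt for m in good]),
                    np.float32([kp[m.trainIdx].pt for m in good]))
cap.release()
```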
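Continuing from the selection sketch (it reuses ref, best_frame and best_pts defined there), the next sketch rectifies the chosen pair so that corresponding pixels lie on the same scan line and computes a semi-global disparity map whose window size is chosen from the image standard deviation. The particular std-to-window mapping and the SGBM parameters are assumptions, not the thesis's exact rule.

```python
# Minimal sketch of rectification plus adaptive-window semi-global matching,
# reusing ref, best_frame and best_pts from the previous sketch; the
# std-to-window mapping and the SGBM parameters are assumptions.
pts1, pts2 = best_pts
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
h, w = ref.shape[:2]
_, H1, H2 = cv2.stereoRectifyUncalibrated(pts1[inliers.ravel() == 1],
                                          pts2[inliers.ravel() == 1], F, (w, h))
left = cv2.warpPerspective(cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY), H1, (w, h))
right = cv2.warpPerspective(cv2.cvtColor(best_frame, cv2.COLOR_BGR2GRAY), H2, (w, h))

# Choose the matching window from the image standard deviation: low-texture
# (low std) images get a larger window, well-textured images a smaller one.
std = float(np.std(left))
block = 11 if std < 20 else (7 if std < 40 else 5)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                             blockSize=block,
                             P1=8 * block * block, P2=32 * block * block,
                             uniquenessRatio=10, speckleWindowSize=100,
                             speckleRange=2)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0   # true disparity
```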
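Finally, for contribution (3), the last sketch fills the invalid disparities by inpainting and median filtering and converts the result to depth via Z = f·B/d, back-projecting each pixel to a 3-D point. The focal length f_px, the baseline (the camera translation between the two frames), the 8-bit quantisation used for inpainting and the centred principal point are simplifying assumptions.

```python
# Minimal sketch of hole filling and depth recovery, assuming the disparity
# map from the previous sketch; f_px, baseline, the 8-bit quantisation for
# inpainting and the centred principal point are simplifying assumptions.
import cv2
import numpy as np

f_px = 700.0        # focal length in pixels (assumed, from calibration)
baseline = 0.10     # camera translation between the two frames, in metres (assumed)

holes = (disparity <= 0).astype(np.uint8)            # missing / invalid pixels
d_max = float(disparity.max()) if disparity.max() > 0 else 1.0
disp8 = np.clip(disparity / d_max * 255.0, 0, 255).astype(np.uint8)
filled8 = cv2.inpaint(disp8, holes, 3, cv2.INPAINT_TELEA)   # fill missing regions
filled8 = cv2.medianBlur(filled8, 5)                        # suppress local speckles
filled = filled8.astype(np.float32) / 255.0 * d_max

depth = f_px * baseline / np.maximum(filled, 1e-6)    # Z = f * B / d

# Back-project every pixel to a 3-D point (X, Y, Z) in camera coordinates,
# taking the principal point at the image centre for simplicity.
hgt, wid = depth.shape
u, v = np.meshgrid(np.arange(wid), np.arange(hgt))
cx, cy = wid / 2.0, hgt / 2.0
X = (u - cx) * depth / f_px
Y = (v - cy) * depth / f_px
points = np.dstack([X, Y, depth]).reshape(-1, 3)      # scene point set
```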