
Research On Depth Measurement Methods Of Monocular Image

Posted on: 2019-01-19
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L X He
Full Text: PDF
GTID: 1318330545961792
Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
Compared with other sensors, the camera intuitively reflects the objective world, captures large amounts of information-rich data, and is usually inexpensive and easy to configure, which makes it a preferred choice for environmental perception in automatic equipment and robots. However, the depth information of a scene is lost when an ordinary camera captures a 2D image, so a machine cannot perceive the distances, sizes, and speeds of objects in the scene by computer-vision methods alone. Depth information therefore needs to be recovered from two-dimensional images; this is called depth measurement. Depth measurement can be widely applied in many fields, such as industrial automation, intelligent robots, target detection and tracking, intelligent transportation, 3D modeling, and 3D video production. Among the many kinds of depth measurement methods, those based on monocular images have become a research hotspot because of their simple equipment, low cost, and easy operation. A monocular camera is also small and light, so in some applications, such as those with space or load limits and hand-eye systems, depth measurement based on monocular vision is the only feasible option. However, such methods are still immature, and detailed research on their calculation principles and technical methods is necessary. This thesis focuses on image depth measurement based on monocular images. The main work and innovations are as follows:

(1) A method for measuring the absolute depth of target objects in images, based on entropy and weighted Hu invariant moments, is proposed. An ordinary monocular camera takes two images of the same scene while its parameters are kept unchanged. First, the camera is moved along its optical axis and an image is captured at each of two locations separated by a distance d. Second, the objects in the images are segmented with the LBF model and their areas are computed. Third, the objects are matched automatically by combining the relative difference ratios of the entropy of each object image and its weighted Hu invariant moments. Finally, the depth of each object is calculated with a formula derived by the author from the optical imaging principle. Real scene images are used to verify the method and to compare it with other methods; the experimental results show the method is effective.

(2) A novel approach based on SIFT (the Scale-Invariant Feature Transform) is also presented. It estimates the depths of objects in two images captured by an uncalibrated ordinary monocular camera. Two images of the same scene are obtained by the same procedure as above. Image segmentation and SIFT feature extraction are then performed on the two images separately, and the objects in the images are matched. Finally, an object's depth is computed from the lengths of a pair of straight line segments. To choose the most appropriate pair of segments and to reduce the computational complexity, convex-hull theory and triangle similarity are employed. Because the absolute depth of an object is calculated from the lengths of line segments formed by two SIFT feature points on the measured object, the method is robust to partial occlusion or absence of the measured object in the scene. Experimental results show the approach is effective and practical, and its accuracy is higher than that of the other compared methods.

(3) A single-image depth measurement method based on gradient information and wavelet correction is proposed. First, the defocus radius, i.e., the degree of defocus, at the edge points in the image is measured from the gradient information of the image. Two situations commonly arise, however: edges that are very close together or crossing, and objects whose color differs only slightly from the background. In either situation, the defocus radius measured directly from gradient information is smaller than its true value, so the error must be corrected. The edge points whose defocus radii need correction are identified using synthesized wavelet-transform coefficients computed on the original image, and their values are corrected according to a formula given in this thesis. A sparse depth map is thus obtained. To suppress errors caused by noise, the sparse depth map is filtered with joint bilateral filtering. Finally, it is extended to a dense depth map by the Matting Laplacian method. Experimental results show the method has high measurement accuracy.

All three methods require only one ordinary camera and do not need calibration of the camera's internal or external parameters. The first two methods suit applications where rails or manipulators can be installed to move the camera, as well as automatic equipment or hand-eye systems requiring more precise control. The third method can compute the relative depth map of an entire scene, whether static or dynamic, from a single image captured by an ordinary camera, and is easier to operate.
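The abstract says the depth formula in method (1) is derived from the optical imaging principle but does not reproduce it. Under a standard pinhole model, moving the camera a known distance d along the optical axis toward the object and comparing how large the matched object appears in the two images yields one plausible form of such a formula. The sketch below is an illustration under that assumption; the function names and the exact formula are not taken from the thesis. The same relation would apply to method (2) if the size measure is the length of a matched SIFT line segment rather than the square root of a segmented area.

```python
import math

def depth_from_axial_motion(size_far, size_near, d):
    """Depth from two images taken after moving the camera a distance d
    along the optical axis toward the object (hypothetical formula).

    Pinhole model: an object of real height H at depth Z images with size
    f*H/Z, so size_far = f*H/Z and size_near = f*H/(Z - d).  Solving gives
        Z = d * size_near / (size_near - size_far).
    """
    if size_near <= size_far:
        raise ValueError("moving toward the object must enlarge its image")
    return d * size_near / (size_near - size_far)

def depth_from_areas(area_far, area_near, d):
    """Variant for segmented-region areas: linear size scales as sqrt(area)."""
    return depth_from_axial_motion(math.sqrt(area_far), math.sqrt(area_near), d)

# Example: a 4-pixel feature grows to 5 pixels after moving 1 m closer,
# so the object was 5 m away at the first camera position.
z = depth_from_axial_motion(4.0, 5.0, 1.0)
```

Note that the formula needs no focal length or intrinsic calibration, which is consistent with the abstract's claim that the camera need not be calibrated; only the translation d and the matched image sizes are required.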
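The abstract does not give the gradient-based estimator of the defocus radius used in method (3). A commonly used stand-in for such edge-based defocus estimation is the re-blur gradient-ratio trick: blur the image once more with a known Gaussian and compare gradient magnitudes at the edge. The 1-D sketch below is an illustrative assumption, not necessarily the estimator used in the thesis.

```python
import numpy as np

def gauss_blur(signal, sigma):
    """1-D Gaussian blur by direct convolution; kernel truncated at 4*sigma."""
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1, dtype=float)
    kernel = np.exp(-x * x / (2.0 * sigma * sigma))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

def defocus_sigma_at_edge(signal, sigma0=1.0):
    """Estimate the Gaussian defocus sigma of the strongest step edge.

    Re-blur trick: blur once more with a known sigma0 and compare gradient
    magnitudes at the edge.  For an ideal step blurred by sigma, the
    peak-gradient ratio is r = sqrt(sigma**2 + sigma0**2) / sigma, hence
    sigma = sigma0 / sqrt(r**2 - 1).
    """
    g1 = np.abs(np.gradient(signal))
    g2 = np.abs(np.gradient(gauss_blur(signal, sigma0)))
    # Search only the interior so zero-padding at the borders is ignored.
    i = int(np.argmax(g1[10:-10])) + 10
    r = g1[i] / g2[i]
    return sigma0 / np.sqrt(r * r - 1.0)

# Synthetic check: a step edge blurred with sigma = 2 should be recovered.
step = np.where(np.arange(100) >= 50, 1.0, 0.0)
edge = gauss_blur(step, 2.0)
```

This kind of estimator illustrates the failure modes the abstract describes: when two edges are very close or the object-background contrast is low, the measured gradient peak is distorted and the recovered radius is biased, which is why the thesis applies a wavelet-based correction.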
Keywords/Search Tags:monocular vision, image depth, entropy of image of object, Hu moment, SIFT, convex hull, wavelet analysis, defocus