
Monocular Based Camera Pose Estimation

Posted on: 2016-02-17    Degree: Doctor    Type: Dissertation
Country: China    Candidate: P Chen    Full Text: PDF
GTID: 1228330467982606    Subject: Control Science and Engineering
Abstract/Summary:
Camera pose estimation is a classic problem in the photogrammetry and computer vision communities. Because of its importance for object localization, it is widely used in everyday and industrial applications such as augmented reality, visual servoing, human-computer interaction, vision-aided guidance and the aerospace industry, and new techniques continue to emerge. In the course of developing a rendezvous and docking sensor in our research group, three aspects of camera pose estimation have been studied: first, how to estimate the camera pose when the intrinsic parameters of the camera and the correspondences between the 3D/2D feature points are known; second, how to estimate the camera pose when the intrinsic parameters are known but the correspondences between the 3D/2D feature points are not; third, how to track the camera pose as it varies through a video sequence. The contributions of the thesis are summarized as follows:

(1) When the camera intrinsic parameters and the correspondences between the 3D/2D feature points are known, and given that most iterative pose estimation algorithms cannot achieve efficiency and accuracy at the same time, a fast camera pose estimation algorithm is proposed. The algorithm consists of an iterative estimation stage and a refining stage. The iterative estimation stage is based on the object space collinearity error and estimates the camera pose in a shorter time. The refining stage employs virtual control points to further adjust the camera pose obtained from the iterative stage. Experiments show that the proposed algorithm is faster and more stable while retaining the estimation precision, especially when the number of feature points is large.

(2) For the simultaneous pose and correspondence determination problem, since the original gravitational pose estimation algorithm fails when false 2D feature points are present, an improved gravitational pose estimation algorithm is proposed, and an extended gravitational pose estimation algorithm is then put forward for the more general case in which both occluded 3D feature points and false 2D feature points are present. In the improved algorithm, a cost function is established using a distance matrix and an assignment matrix, and the mechanical analysis carried out in each iteration is also improved. To handle the case in which both occluded 3D feature points and false 2D feature points are present, the single-link algorithm and the SoftPOSIT algorithm are combined with the improved algorithm to form the extended algorithm. Simulations and real-image tests show that the extended gravitational pose estimation algorithm remains applicable when both occluded 3D feature points and false 2D feature points are present, and that it is faster, more likely to recover the correct correspondences, and more precise than most state-of-the-art simultaneous pose and correspondence determination algorithms.

(3) In the rendezvous and docking sensor application, since the cooperative target blinks periodically in the video sequence and the mean shift algorithm is prone to producing a large offset, algorithms for detecting and tracking the cooperative target in a video sequence are proposed. By combining their results with the pose estimation algorithms, 3D tracking can further be realized.
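To make the error measure behind contribution (1) concrete, the sketch below shows a generic orthogonal-iteration style solver that minimizes the object-space collinearity error sum_i ||(I - V_i)(R p_i + t)||^2, where V_i is the line-of-sight projection matrix of image point i. This is only an illustration of the general technique under assumed interfaces (NumPy arrays, normalized image coordinates, invented function names such as estimate_pose); it does not reproduce the thesis' fast algorithm or its virtual-control-point refinement.

```python
# Sketch: iterative pose estimation driven by the object-space collinearity error.
import numpy as np

def line_of_sight_projectors(img_pts):
    """img_pts: (n, 2) normalized image coordinates. Returns (n, 3, 3) projectors V_i."""
    v = np.hstack([img_pts, np.ones((len(img_pts), 1))])               # homogeneous rays
    return np.einsum('ni,nj->nij', v, v) / np.einsum('ni,ni->n', v, v)[:, None, None]

def estimate_pose(obj_pts, img_pts, iters=50):
    """obj_pts: (n, 3) model points; img_pts: (n, 2) normalized image points."""
    n = len(obj_pts)
    V = line_of_sight_projectors(img_pts)
    Vbar = V.mean(axis=0)
    T_fac = np.linalg.inv(np.eye(3) - Vbar) / n                        # for the closed-form t
    R = np.eye(3)                                                      # crude initialization
    for _ in range(iters):
        # Optimal translation for the current rotation (zero gradient of the error w.r.t. t).
        t = T_fac @ np.einsum('nij,nj->i', V - np.eye(3), obj_pts @ R.T)
        # Project the transformed points onto their lines of sight.
        q = np.einsum('nij,nj->ni', V, obj_pts @ R.T + t)
        # Optimal rotation by orthogonal Procrustes between centred model and projected points.
        A = (q - q.mean(0)).T @ (obj_pts - obj_pts.mean(0))
        U, _, Vt = np.linalg.svd(A)
        R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, t
```

For contribution (2), the pairing of a distance matrix with an assignment matrix can be illustrated by a SoftPOSIT-style soft-assignment step: squared reprojection distances feed an assignment matrix that is made approximately doubly stochastic by Sinkhorn normalization, with a slack row and column absorbing occluded 3D points and false 2D detections. This is a generic sketch of that idea, not the thesis' gravitational cost function; the parameters alpha, beta and sinkhorn_iters are illustrative.

```python
# Sketch: soft correspondence weights from a distance matrix (SoftPOSIT-style).
import numpy as np

def soft_assignment(proj_pts, img_pts, alpha=1.0, beta=5.0, sinkhorn_iters=30):
    """proj_pts: (m, 2) projected model points; img_pts: (n, 2) detected image points."""
    d2 = ((proj_pts[:, None, :] - img_pts[None, :, :]) ** 2).sum(-1)   # (m, n) squared distances
    m_mat = np.exp(-beta * (d2 - alpha))                               # small distance -> large weight
    # Augment with a slack row/column so unmatched points can be assigned to "nothing".
    M = np.ones((len(proj_pts) + 1, len(img_pts) + 1))
    M[:-1, :-1] = m_mat
    for _ in range(sinkhorn_iters):                                    # alternate row/column normalization
        M[:-1, :] /= M[:-1, :].sum(axis=1, keepdims=True)
        M[:, :-1] /= M[:, :-1].sum(axis=0, keepdims=True)
    return M[:-1, :-1]                                                 # soft correspondence weights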
Taking advantage of the fact that the cooperative target blinks periodically in the video sequence, the accumulated frame difference algorithm is employed to detect the cooperative target. A tracking window is then established and the state of the cooperative target is recognized through image clustering. To track the changing position of the cooperative target across consecutive frames, a line search based 2D tracking algorithm is proposed: the similarity between histograms is taken as the objective function, the average optical flow in the tracking window is used as the search direction, and the advance-and-retreat method together with the golden section method is applied to find the optimal step size, which yields the position of the cooperative target in the current frame. Experiments on a video sequence show that the proposed algorithms can accurately detect and track the cooperative target, and that the 3D pose variation of the cooperative target can be recovered continuously by combining the camera pose estimation algorithms with the results of the 2D tracking algorithm.
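A minimal sketch of such a line-search tracking step is given below: the candidate window is moved along the mean optical-flow direction and the step size is chosen by a golden-section search on a histogram-similarity objective. The Bhattacharyya coefficient, the window and histogram parameters, and the function names are assumptions made for illustration; the thesis' advance-and-retreat bracketing and its exact similarity measure are not reproduced.

```python
# Sketch: one line-search tracking update along the mean optical-flow direction.
import numpy as np

def window_histogram(gray, center, half=20, bins=32):
    """Grey-level histogram of a square window around `center` = (x, y)."""
    x, y = int(round(center[0])), int(round(center[1]))
    patch = gray[max(y - half, 0):y + half, max(x - half, 0):x + half]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)                                   # normalize to sum 1

def similarity(h1, h2):
    """Bhattacharyya coefficient between two normalized histograms."""
    return np.sum(np.sqrt(h1 * h2))

def golden_section_step(objective, lo, hi, tol=0.5):
    """1-D golden-section maximization of `objective` over the step-size interval [lo, hi]."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = objective(c), objective(d)
    while b - a > tol:
        if fc > fd:                       # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = objective(c)
        else:                             # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = objective(d)
    return 0.5 * (a + b)

def track_step(gray, center, ref_hist, flow_dir, max_step=30.0):
    """Move `center` along `flow_dir` to the step size that maximizes histogram similarity."""
    flow_dir = flow_dir / (np.linalg.norm(flow_dir) + 1e-9)
    obj = lambda s: similarity(ref_hist, window_histogram(gray, center + s * flow_dir))
    s_best = golden_section_step(obj, 0.0, max_step)
    return center + s_best * flow_dir
```

The one-dimensional search is what keeps the update cheap: only a handful of histogram evaluations along a single direction are needed per frame, instead of a full 2D search over the neighborhood of the previous window.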
Keywords/Search Tags: Monocular Vision, Camera Pose Estimation, Gravitational Field, 3D Tracking, Rendezvous and Docking