
Research On Camera Localization Based On Monocular Vision

Posted on: 2010-10-09
Degree: Master
Type: Thesis
Country: China
Candidate: H J Shen
Full Text: PDF
GTID: 2178360272996382
Subject: Carrier Engineering
Abstract/Summary:
Computer vision provides rich information and a high level of intelligence. Owing to these characteristics, vision-based localization has been applied extensively in many fields, such as aerospace, aviation, marine navigation, and ground vehicles. Navigation systems based on this technology offer small volume, low cost, and high autonomy, and have been used successfully in low-altitude aircraft navigation, unmanned aerial vehicle navigation, and probe landing navigation. The underlying theory is gradually maturing.

Because of these characteristics of computer vision, it can be used to realize self-positioning and tracking during the navigation of a lunar rover. The essence of lunar rover self-positioning is as follows: first, solve for the pose change of the camera installed on the rover; then, obtain the rover's position indirectly. The research on camera positioning based on monocular vision in this paper therefore has both theoretical and practical significance, and can make a positive contribution to the vision-based navigation of lunar rovers.

Positioning with a visual sensor uses the relationship between the positions of image pixels and scene points. The specific process is to solve for the camera's position in the world from the positions of feature points, both in the image and in the world, based on the camera model. Another way to realize positioning is to apply a series of geometric or other operations to the image and obtain the camera's three-dimensional position directly; this is an absolute positioning method.

This paper consists of five parts. The main content is as follows:

1. Establish the single-camera imaging model.

2. Study a two-step camera calibration method based on a planar template, including the design of the calibration experiment and the solution of the calibration results.

3.
Study a calibration method of the camera based on geometric relationships, including the design of a real-image experiment and analysis of the results.

4. Study feature extraction and matching algorithms based on image gray-scale information.

5. Study a camera positioning method based on image mosaicking, including the design of a real-image experiment and analysis of the results.

The commonly used moving-camera positioning method based on monocular vision is the linear method built on a 2D planar template proposed by Zhang Zhengyou. This method captures an image of the template in an arbitrary orientation and obtains the homography matrix from the plane coordinates of the template and the matched corresponding points in the image and on the template. If the template is large, establishing the matching relationships becomes a burden. Matching is also one of the difficulties of image processing and analysis, and its correctness directly influences the precision of the calibration, so manual intervention is generally needed to match the corresponding points between the image and the template. The disadvantage of this positioning method is therefore that it cannot achieve rapid, real-time, independent positioning.

Thus, this paper starts from the change in pixel positions between two adjacent images to realize camera positioning in a plane, which avoids excessive human intervention and enhances the real-time performance of the camera positioning.

The first camera positioning method in this paper is based on geometric relationships and determines the camera's position in the 2D plane.
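The homography estimation at the heart of the template method above can be illustrated with the direct linear transform (DLT). The following is a minimal numpy sketch with synthetic correspondences, not the thesis implementation; the matrix `H_true` and the corner coordinates are assumed values for illustration:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate H (up to scale) from >= 4 point correspondences via the DLT.

    Each pair (x, y) -> (u, v) contributes two rows derived from the
    constraint that [u, v, 1] is parallel to H [x, y, 1]^T.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of A with the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]            # fix the scale so that H[2, 2] = 1

# Hypothetical example: four template corners and their images under a known H.
H_true = np.array([[2.0, 0.1, 5.0],
                   [0.2, 1.8, 3.0],
                   [1e-3, 2e-3, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = []
for x, y in src:
    u, v, w = H_true @ np.array([x, y, 1.0])
    dst.append((u / w, v / w))

H = estimate_homography(src, dst)   # recovers H_true up to numerical precision
```

With exactly four non-degenerate correspondences the solution is exact; with more, the SVD gives a least-squares estimate, which is why the template-based calibration extracts many corners per image.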
The camera position in the world is finally determined by using known factors of the camera's environment and information about the site, relying on the triangular geometric relationship among the monocular distance-measurement model, the camera, and the positioning landmarks.

To realize the distance measurement, this paper adopts the planar-template calibration method to obtain the internal parameters of the camera. The calibration procedure is implemented on the Matlab platform: first, capture several images of a checkerboard template; second, extract the corners of each template image; third, establish the homography matrices and solve the internal parameters linearly; last, refine the linear calibration result with the Levenberg-Marquardt algorithm.

Through image processing techniques including smoothing, fixed-threshold segmentation, dilation, and thinning, two feature points and their image pixel coordinates are obtained, which yields the parameters needed for distance measurement.

The camera's 2D world coordinates can then be computed quickly as follows: first, establish the geometric relationship between the camera and the two feature points (fixed elements in the scene); second, obtain the horizontal distances between the camera and the feature points, and solve for the 2D coordinates of the feature points in the actual scene. Experiments verify that this method has high precision and avoids excessive human intervention.

The second positioning method in this paper is based on image mosaicking. It uses the affine transformation estimated during image registration in the mosaicking process; this transformation reflects the position change between two images caused by the camera's motion. The change of the camera, including its direction of movement and its rotation angle around the optical axis, can then be obtained indirectly from spatial geometric relationships.
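The first method's triangular geometry can be sketched as the planar intersection of two measured ranges to known feature points. The landmark coordinates and distances below are hypothetical, and the fixed sign choice for the perpendicular offset stands in for the scene knowledge the thesis uses to pick the correct side of the baseline:

```python
import math

def locate_camera_2d(p1, p2, d1, d2):
    """Planar camera position from two known feature points p1, p2 and the
    measured horizontal distances d1, d2 to them.

    Returns the solution on the left side of the p1 -> p2 baseline; in
    practice, scene knowledge disambiguates the two mirror solutions.
    """
    (x1, y1), (x2, y2) = p1, p2
    L = math.hypot(x2 - x1, y2 - y1)             # baseline length
    a = (d1 ** 2 - d2 ** 2 + L ** 2) / (2 * L)   # along-baseline offset from p1
    h = math.sqrt(max(d1 ** 2 - a ** 2, 0.0))    # perpendicular offset
    ex, ey = (x2 - x1) / L, (y2 - y1) / L        # unit vector along the baseline
    fx, fy = x1 + a * ex, y1 + a * ey            # foot of the perpendicular
    return (fx - h * ey, fy + h * ex)

# Hypothetical scene: landmarks at (0, 0) and (4, 0); the true camera
# position (1, 2) gives ranges sqrt(5) and sqrt(13).
cam = locate_camera_2d((0.0, 0.0), (4.0, 0.0),
                       math.sqrt(5.0), math.sqrt(13.0))
```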
The camera positioning can then be realized preliminarily. For image registration, this paper uses feature extraction based on gray-scale information, and then applies the correlation-coefficient method to find three matching point pairs between the reference image and the image to be stitched, from which the affine transformation parameters between the two images are calculated.

After establishing the affine transformation model between the images, calculating its parameters, and fusing the images, the mosaicked result is obtained. In this process, linear weighting over the overlap area is used to eliminate visible seams. Finally, from the recovered affine transformation parameters, the change in the camera's position is deduced, namely the direction of motion and the rotation angle of the camera. Experiments show that, although this method cannot completely recover the camera coordinates as the first method does, it yields the camera's rotational motion and produces good mosaicking results.

Based on Matlab 7.1 and Visual C++ 6.0, the software and part of the algorithms were developed and verified with real-image experiments. Satisfactory experimental results were obtained, and preliminary camera positioning was realized.
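The second method's registration step, recovering an affine transformation from three matched point pairs and reading the camera's rotation out of it, can be sketched as follows. The correspondences are synthetic, generated from an assumed 30-degree rotation and translation rather than real matches:

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve the six affine parameters from exactly three point pairs.

    Each pair (x, y) -> (u, v) gives two linear equations in the unknowns
    a11, a12, tx, a21, a22, ty of the 2x3 affine matrix.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0]); b.append(u)
        A.append([0, 0, 0, x, y, 1]); b.append(v)
    a11, a12, tx, a21, a22, ty = np.linalg.solve(np.array(A, float),
                                                 np.array(b, float))
    return np.array([[a11, a12, tx],
                     [a21, a22, ty]])

# Hypothetical correspondences produced by a 30-degree rotation plus a
# translation of (5, -2).
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([5.0, -2.0])
src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dst = [tuple(R @ np.array(p) + t) for p in src]

M = affine_from_points(src, dst)
# The rotation angle around the optical axis is read from the linear part.
angle = np.rad2deg(np.arctan2(M[1, 0], M[0, 0]))
```

The three pairs must not be collinear, otherwise the 6x6 system is singular; this is why the correlation-coefficient matching selects well-separated feature points.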
Keywords/Search Tags: Camera localization, Camera calibration, Monocular distance measurement, Image stitching, Image registration, Image mosaic