
Vision-Localization Method Of Robot Based On The Distance-Constraints Of Feature Points

Posted on: 2009-03-05    Degree: Master    Type: Thesis
Country: China    Candidate: X J Wu    Full Text: PDF
GTID: 2178360242980077    Subject: Mechanical Manufacturing and Automation
Abstract/Summary:
Mobile robotics is one of the most active research fields and has found wide application in military, domestic and many other settings. Navigation is a basic prerequisite for a mobile robot to complete its mission, and the central problem in navigation is determining the robot's position and orientation. Many universities, research institutes and companies have studied robot localization and proposed a variety of methods, which can in principle be divided into two categories: relative localization and absolute localization. Relative localization is comparatively cheap, but the robot's systematic and non-systematic errors are hard to eliminate and the drift error accumulates over time, so it is unsuitable for applications in which the robot must always be located precisely. Absolute localization usually relies on sensors such as sonar, laser radar, GPS or indoor GPS, but all absolute methods face a trade-off between localization accuracy and cost. Drawing on computer vision and photogrammetry, this paper presents a new robot-localization system.

The system consists of a CCD camera, a fixed-focus lens, an infrared filter, a frame grabber card, a computer, a mobile robot carrying six feature points, and the measurement software. The infrared light emitted by the feature points passes through the infrared filter and the fixed-focus lens and reaches the CCD surface. After photoelectric conversion and signal conditioning, the system obtains a gray image of the feature points. After image processing, the image coordinates of the feature points and the known distances among the feature points are fed into the mathematical model of the localization system, which then yields the robot's position and orientation in its working environment.

The mathematical model of the localization system is established on the pinhole camera model. Combining the collinearity condition with the distance constraints among the feature points yields a set of nonlinear constraint equations (sketched below). An algorithm based on SVD (singular value decomposition) is used to solve this equation set and recover the coordinates of all feature points in the camera coordinate system. Finally, the position and orientation of the robot are determined by coordinate transformation and the least-squares method.

Because the image feature points occupy only a tiny fraction of the gray image, traditional methods cannot segment it reliably. To solve this problem, this paper proposes a new adaptive-threshold image segmentation method: after an initial threshold is set, the objects in the gray image are marked by a connected-component labeling strategy, which returns the maximum label. The segmentation threshold is then searched repeatedly, using the maximum label as the judgment condition; when the maximum label satisfies the termination condition, the method outputs the segmentation threshold and the label of every image feature point (a sketch of this search is given below). Experiments show that the segmentation method is effective.

After analyzing several sub-pixel location algorithms for circular or elliptic targets, this paper adopts the BIGSW (bilinear interpolation gray square weight) algorithm to obtain the sub-pixel coordinates of the feature points in the gray image (the weighting step is sketched below). Simulation experiments show that the algorithm is accurate and robust.
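As a minimal illustration of these constraints (the notation below is not taken from the thesis): writing the image measurement of feature point i as m_i = (u_i, v_i, f)^T in the camera frame, with unknown depth factor lambda_i and known inter-point distance d_ij, the collinearity condition and the distance constraints give

```latex
% Collinearity: each feature point lies on the ray through the projection
% centre and its image point, so its camera-frame coordinates are
\mathbf{P}_i = \lambda_i \mathbf{m}_i, \qquad
\mathbf{m}_i = (u_i,\, v_i,\, f)^\top, \quad \lambda_i > 0.

% Distance constraint between feature points i and j:
\|\mathbf{P}_i - \mathbf{P}_j\|^2
  = \lambda_i^2\,\mathbf{m}_i^\top\mathbf{m}_i
    - 2\lambda_i\lambda_j\,\mathbf{m}_i^\top\mathbf{m}_j
    + \lambda_j^2\,\mathbf{m}_j^\top\mathbf{m}_j
  = d_{ij}^2.
```

With six feature points there are up to fifteen such quadratic equations in the six unknown depths, which is the kind of nonlinear equation set the SVD-based algorithm is used to solve.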
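A minimal sketch of the adaptive threshold search described above, assuming the termination condition is simply that the number of connected components equals the expected number of feature points (six); the function name, the threshold step and the use of scipy.ndimage.label are illustrative assumptions, not the thesis' implementation:

```python
import numpy as np
from scipy import ndimage

def adaptive_threshold_segment(gray, n_points=6, t_init=200, t_min=10, step=5):
    """Search a global threshold until the number of connected components
    (the maximum label) matches the expected number of feature points.

    Illustrative sketch only; the thesis' exact termination condition may differ.
    """
    t = t_init
    while t >= t_min:
        binary = gray > t                           # threshold the gray image
        labels, max_label = ndimage.label(binary)   # connected-component labeling
        if max_label == n_points:                   # termination condition
            return t, labels
        t -= step                                   # lower the threshold and search again
    raise RuntimeError("no threshold found that yields the expected point count")
```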
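The gray-square weighting at the heart of BIGSW can be sketched as follows; the bilinear-interpolation refinement of the full algorithm is omitted, and the helper name is hypothetical:

```python
import numpy as np

def gray_square_weighted_centroid(gray, labels, label_id):
    """Sub-pixel centre of one labeled spot, weighting each pixel by the
    square of its gray value (the 'gray square weight' part of BIGSW).
    The bilinear-interpolation step of the full algorithm is omitted here.
    """
    ys, xs = np.nonzero(labels == label_id)
    w = gray[ys, xs].astype(np.float64) ** 2   # squared gray values as weights
    u = np.sum(w * xs) / np.sum(w)             # sub-pixel column coordinate
    v = np.sum(w * ys) / np.sum(w)             # sub-pixel row coordinate
    return u, v
```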
In addition, this paper provides a strategy for matching each physical feature point with its corresponding image feature point.

At present there are many camera calibration methods, which can be classified into two categories: photogrammetric calibration and self-calibration. Considering the needs of this project and the characteristics of each method, this paper extends Zhang's (2000) calibration method with a nonlinear model that includes the camera's radial and tangential distortion. The camera calibration model of the system is established by solving the homography matrices, determining initial values for the camera's intrinsic and extrinsic parameters, and refining them through optimization. The calibration experiment is then carried out with a planar calibration board and a Matlab procedure (a generic sketch of this type of calibration is given below).

To verify the effectiveness of the mathematical model of the localization system, a field experiment was performed with a CCD camera, a 760 nm infrared filter, an X64-CL_iPro frame grabber card, a DELL workstation, and a locating disc used in place of the mobile robot. After the system was initialized, the experiment determined the position and orientation measuring precision. The main results are as follows: in the Xw direction the position error is ±1.23 mm with a repeatability of 0.42 mm; in the Yw direction the position error is ±0.67 mm with a repeatability of 0.51 mm; the orientation error is ±0.70° with a repeatability of 0.40°. These results confirm that the mathematical model of the localization system is valid. Finally, the paper analyses the sources of systematic error and outlines future work on the system.
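The thesis performs this calibration with its own Matlab procedure; as a rough, non-authoritative analogue, OpenCV's calibrateCamera implements a Zhang-style planar calibration that also estimates radial and tangential distortion. The board geometry and file naming below are assumptions for illustration only:

```python
import glob
import cv2
import numpy as np

# Hypothetical 9x6 inner-corner chessboard with 25 mm squares; adjust to the
# actual planar calibration board used.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0

obj_pts, img_pts, img_size = [], [], None
for path in glob.glob("calib_*.png"):          # assumed file naming
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        img_size = gray.shape[::-1]            # (width, height)

# calibrateCamera estimates the intrinsic matrix plus radial (k1, k2, k3)
# and tangential (p1, p2) distortion, refined by nonlinear optimization.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, img_size, None, None)
print("RMS reprojection error:", rms)
```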
Keywords/Search Tags: Robot localization, Photogrammetry, Camera calibration, Image segmentation, Distance constraint