
Technology Of Computer Vision Measurement Of Structural Parts Size

Posted on: 2012-11-05    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y Q Hou    Full Text: PDF
GTID: 1118330368978853    Subject: Mechanical design and theory
Abstract/Summary:
As a non-contact measurement method, computer vision measurement technology has attracted increasing attention. Building on a study of the system scheme and the hardware and software composition of a computer vision measurement system, this dissertation carries out in-depth research on the key technologies of vision measurement, including image feature extraction, camera calibration and fundamental matrix estimation, and conducts experiments and application studies on the vision measurement of structural part dimensions.

Image feature extraction is one of the key technologies of computer vision measurement, and its accuracy directly affects the measurement accuracy of the whole system. This dissertation studies feature extraction algorithms for edges and corners.

The edge is one of the most basic image features: the set of pixels at which the grey level exhibits a step or roof change. Early edge detection algorithms used in vision measurement were mostly pixel-level, such as the widely used Sobel, Roberts, Prewitt, Laplacian and Canny operators. They can only determine in which pixel an edge lies, not its position within the pixel. Hueckel was the first to propose a sub-pixel edge detection theory, which obtains the sub-pixel edge location by fitting. Detection algorithms based on moments and on interpolation were proposed subsequently. Moment-based algorithms use the invariance of grey-level moments and spatial moments to compute the edge location. Interpolation-based algorithms interpolate the grey values and obtain the sub-pixel edge location from the extremum of the first derivative or the zero crossing of the second derivative. Algorithms of this kind are collectively called sub-pixel edge detection algorithms.

Because the true edge location cannot be determined, there is no direct way to tell which detected edge lies closest to the actual edge. This dissertation therefore proposes a new method for evaluating the detection accuracy of sub-pixel edge detection algorithms. The analysis shows that moment-based and interpolation-based algorithms are fast but very sensitive to noise, so the detected positions fluctuate easily. Fitting-based algorithms, which obtain the sub-pixel edge location by a least-squares fit of the grey values to a hypothesized edge model, are robust to noise and more stable, and the resulting sub-pixel edge accuracy is consistently higher than that of the other two approaches.

The corner is another important image feature and plays an essential role in image understanding and analysis. A corner is commonly regarded as a point where the two-dimensional grey level changes sharply, or as a point of large curvature on an image edge curve. In 1988, Harris et al. improved the Moravec operator and proposed the well-known Harris operator, also called the Plessey corner detector. In 1997, Smith and Brady put forward another corner detection method, the SUSAN operator. The detection accuracy of both methods, however, only reaches pixel level. The Forstner operator is a point-location operator commonly used in photogrammetry, but a threshold must be chosen when it is applied. Improved Harris operators proposed later achieve sub-pixel location accuracy.
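To make the interpolation-based idea concrete, the following is a minimal Python sketch (using NumPy and OpenCV, which the dissertation does not specify) that refines a pixel-level edge found with the Sobel operator by fitting a parabola to the gradient magnitude and taking the vertex as the first-derivative extremum. The function name and the synthetic test image are illustrative assumptions, not the dissertation's implementation.

```python
# Illustrative sketch of interpolation-based sub-pixel edge localization along
# one image row: Sobel gradient -> pixel-level peak -> parabolic refinement.
import numpy as np
import cv2


def subpixel_edge_on_row(image: np.ndarray, row: int) -> float:
    """Return the sub-pixel column of the strongest edge on the given row."""
    # Pixel-level gradient along x (Sobel operator).
    gx = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3)
    g = np.abs(gx[row, :])

    # Pixel-level edge: column with the largest gradient magnitude.
    c = int(np.argmax(g))
    if c == 0 or c == len(g) - 1:
        return float(c)  # cannot interpolate at the image border

    # Fit a parabola through (c-1, c, c+1) and take its vertex as the
    # sub-pixel refinement (extremum of the interpolated gradient).
    y0, y1, y2 = g[c - 1], g[c], g[c + 1]
    denom = y0 - 2.0 * y1 + y2
    offset = 0.0 if abs(denom) < 1e-12 else 0.5 * (y0 - y2) / denom
    return c + float(np.clip(offset, -0.5, 0.5))


if __name__ == "__main__":
    # Synthetic test: a blurred vertical step edge with its center near x = 40.3.
    x = np.arange(100, dtype=np.float64)
    img = np.tile(1.0 / (1.0 + np.exp(-(x - 40.3))), (50, 1))
    print(subpixel_edge_on_row(img, row=25))
```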
Based on the required image measurement accuracy, this dissertation proposes a sub-pixel corner detection method. Its basic principle is to use an edge-fitting algorithm to obtain the equations of the edges that form the corner, and then to take their intersection as the sub-pixel corner coordinate (see the sketch after this section).

Calibration is a key link in computer vision measurement. As the required image measurement accuracy rises, so does the required camera calibration accuracy. Existing camera calibration methods fall into two categories: traditional calibration and self-calibration. When high accuracy is required and the camera parameters do not change frequently, traditional calibration is chosen. According to how the parameters are solved, traditional calibration is generally divided into linear methods, non-linear methods and two-step methods. Linear methods, such as the direct linear transformation (DLT), need no iteration but must simplify the non-linear model and therefore cannot reach high calibration accuracy. Non-linear optimization methods solve the non-linear equations directly by iteration and can achieve high accuracy, but the result depends strongly on the choice of initial values. Two-step methods first solve the camera parameters with a linear transformation and then, taking lens distortion into account, refine the calibration by optimization with the first-step parameters as initial values. Representative two-step methods are those of Tsai, Heikkila and Zhang Zhengyou. Apart from differences in the distortion model, the main difference among these three methods lies in how the linear equation system for the initial calibration values is established: Tsai uses the radial alignment constraint; Heikkila ignores non-linear distortion and uses the projection relation between spatial points and pixels; Zhang Zhengyou exploits the orthogonality of the vectors that form the rotation matrix.

For the pinhole camera model, this dissertation presents an improved method for solving the intrinsic and extrinsic parameters. First, the image coordinates of the actual optical center are solved from the feature points on the calibration board by least-squares line fitting. The remaining model parameters are then obtained with Tsai's method. Finally, all parameters are refined by global optimization with the Levenberg-Marquardt algorithm. The results show that this method yields a lower re-projection error than Tsai's method and higher accuracy, making it an effective calibration method for the intrinsic and extrinsic camera parameters.

Because the edge of an object in the image may shift as the background illumination changes from light to dark, measurement error is inevitable when a distance is obtained from the difference between two such edges. Addressing the influence of illumination during calibration on the calibration result, this dissertation combines theoretical analysis with experiments and proposes improvement measures.

Since single-camera calibration only establishes the correspondence between the image plane and one spatial plane, it is suitable for measuring dimensions within that plane.
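The corner-from-edge-intersection principle can be illustrated with a short, hedged sketch: fit a straight line to the edge points of each of the two edges meeting at the corner and intersect the two fitted lines. The total-least-squares fit, the function names and the synthetic data below are assumptions for illustration, not the dissertation's actual algorithm or code.

```python
# Illustrative sketch: sub-pixel corner as the intersection of two fitted edge lines.
import numpy as np


def fit_line_tls(points: np.ndarray):
    """Total-least-squares line fit: returns unit normal n and offset d with n . p = d."""
    centroid = points.mean(axis=0)
    # Principal direction of the point cloud; the line normal is orthogonal to it.
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    return normal, float(normal @ centroid)


def corner_from_edges(edge1: np.ndarray, edge2: np.ndarray) -> np.ndarray:
    """Sub-pixel corner coordinate = intersection of the two fitted edge lines."""
    n1, d1 = fit_line_tls(edge1)
    n2, d2 = fit_line_tls(edge2)
    # Solve the 2x2 system [n1; n2] @ corner = [d1; d2].
    return np.linalg.solve(np.vstack([n1, n2]), np.array([d1, d2]))


if __name__ == "__main__":
    # Synthetic noisy edge points on y = 0.5x + 3 and y = -2x + 28 (true corner ~ (10, 8)).
    t = np.linspace(0, 20, 40)
    rng = np.random.default_rng(0)
    edge_a = np.c_[t, 0.5 * t + 3.0] + rng.normal(0, 0.05, (40, 2))
    edge_b = np.c_[t, -2.0 * t + 28.0] + rng.normal(0, 0.05, (40, 2))
    print(corner_from_edges(edge_a, edge_b))
```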
When measuring the dimensions of a structural part, however, a single camera cannot recover depth information. Given the characteristics of structural part dimensions, this dissertation adopts binocular vision for the measurement. A binocular vision measurement system usually consists of two cameras that imitate the human eyes and perceives the depth of an object through the principle of parallax. Calibrating such a system therefore requires not only the intrinsic and extrinsic parameters of each camera, but also the geometric relationship between the two cameras. This dissertation describes the binocular vision model in terms of the epipolar geometric constraint, the fundamental matrix and the essential matrix, and compares methods for solving the fundamental matrix. Experiments show that robust methods achieve better accuracy and stability than linear and iterative methods. An improved computation of the fundamental matrix is then proposed; its mean error and variance are lower than those of the robust method, effectively improving the accuracy and stability of the fundamental matrix. Although the computation takes longer, it is performed during calibration, so the measurement speed is not affected.

Finally, this dissertation brings together all the above technical links of vision measurement. Under laboratory conditions, gauge block dimensions and the axial dimensions of prefabricated camshafts are measured to test the measurement accuracy of vision measurement of structural part dimensions.
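As an illustration of the linear-versus-robust comparison, the following hedged sketch estimates the fundamental matrix from a set of point correspondences with OpenCV's normalized 8-point and RANSAC options and compares the mean epipolar-constraint residual. The correspondence arrays and the residual metric are assumptions for illustration; this is not the dissertation's improved method.

```python
# Illustrative comparison of a linear (8-point) and a robust (RANSAC) estimate
# of the fundamental matrix F, scored by the mean residual |x2^T F x1|.
import numpy as np
import cv2


def epipolar_residual(F: np.ndarray, pts1: np.ndarray, pts2: np.ndarray) -> float:
    """Mean absolute value of the epipolar constraint x2^T F x1 over all matches."""
    h1 = np.c_[pts1, np.ones(len(pts1))]
    h2 = np.c_[pts2, np.ones(len(pts2))]
    return float(np.mean(np.abs(np.einsum("ij,jk,ik->i", h2, F, h1))))


def compare_f_estimates(pts1: np.ndarray, pts2: np.ndarray):
    """pts1, pts2: (N, 2) arrays of corresponding image points, N >= 8."""
    F_lin, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    F_ransac, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel().astype(bool)
    print("8-point residual:", epipolar_residual(F_lin, pts1, pts2))
    print("RANSAC residual :", epipolar_residual(F_ransac, pts1[inliers], pts2[inliers]))
    return F_lin, F_ransac
```

In practice the robust estimate is computed once during system calibration, as noted above, so its longer running time does not slow down the subsequent measurements.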
Keywords/Search Tags: computer vision, edge detection, binocular vision, calibration, fundamental matrix, structural parts