
3D Size Measurement Based On Digital Image

Posted on: 2009-07-22
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y Y Tian
Full Text: PDF
GTID: 1118360272476559
Subject: Mechanical design and theory
Abstract/Summary:
Computer vision research aims at recovering the three-dimensional structure of objects from two-dimensional images, and three-dimensional measurement is one of its most important theoretical problems and most widely applied research fields. The spatial point is the basic unit of three-dimensional structure: points make up lines, lines make up planes, and planes make up three-dimensional structures. The three-dimensional reconstruction of points is therefore the most fundamental task, underlying both dense per-pixel reconstruction and the recovery of three-dimensional shapes. A scene usually contains many characteristic points, and once their positions are determined the three-dimensional structure is determined; these characteristic points make up the spatial structure of the image. This dissertation therefore studies the problem of three-dimensional measurement in computer vision, whose advantages include non-contact operation, high speed, high precision and strong anti-interference capability.

The essence of recovering a spatial point is to obtain its three-dimensional coordinates from the optical model of the camera. Calibrating the intrinsic and extrinsic parameters of this model means calibrating the transformations between the world and camera coordinate frames, between the camera and image-plane coordinate frames, between the image-plane and pixel coordinate frames, and hence between the world and pixel coordinate frames. We therefore first carry out monocular calibration of a CCD camera. Monocular vision alone cannot provide spatial depth, so on this basis we extend the calibration to binocular vision. The work on three-dimensional measurement comprises a sub-pixel edge detection algorithm based on model fitting, an analysis of sub-pixel edge detection based on a modified Bezier function, binocular vision calibration, and three-dimensional measurement of points applied to measuring part dimensions.

A sub-pixel edge detection algorithm based on fitting is developed. According to localization precision, edge detection can be divided into pixel-level and sub-pixel-level detection. Pixel-level operators such as Sobel, Laplacian and Canny are fast but cannot localize the edge precisely: because the sensing cells of a CCD have a finite size, the edge of an object does not generally fall exactly on a cell boundary, so part of the true edge information is lost during imaging. The aim of sub-pixel edge localization is to find, exactly or approximately, the points inside a pixel that lie on the true edge of the object. Hueckel first proposed a sub-pixel edge detection technique; existing sub-pixel operators fall into three types, based on spatial moments, least-squares fitting and interpolation. Among them, the fitting approach performs a least-squares fit to an assumed grey-level model of the edge to obtain the sub-pixel edge position; it is more precise than the other two types and is stable and robust against noise. Fitting methods can be further divided into polynomial fitting and least-squares model fitting, and the latter is adopted here. We propose a sub-pixel edge localization algorithm, modified Bezier fitting, which fits a model of the grey-level distribution across the edge by least squares, and implement it in a program. A comparison of the two methods shows that the edge-fitting residual of the proposed method is smaller than that of Gaussian fitting, and that the sub-pixel edge positions located by the fitting method are accurate with little scatter. The method is suitable for sub-pixel localization of the straight edges of planar parts.
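The abstract does not reproduce the modified Bezier edge model itself, so the sketch below only illustrates the general fitting idea: a smoothed step (erf) model of the grey-level distribution across an edge is fitted to a one-dimensional profile by least squares, and the fitted edge position is read off with sub-pixel resolution. The model, function names and parameter values are illustrative assumptions, not the dissertation's method.

```python
# Illustrative sketch only: an erf (Gaussian-blurred step) model stands in for
# the assumed grey-level distribution across the edge. All names are hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def step_edge_model(x, a, b, x0, sigma):
    """Grey level across an ideal step edge blurred by the imaging system:
    background level a, edge contrast b, sub-pixel edge position x0, blur sigma."""
    return a + b * 0.5 * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

def locate_edge_subpixel(profile):
    """Least-squares fit of the edge model to a 1D grey-level profile sampled
    at integer pixel positions; returns the estimated sub-pixel edge position."""
    x = np.arange(len(profile), dtype=float)
    p0 = [profile.min(), profile.max() - profile.min(), len(profile) / 2.0, 1.0]
    popt, _ = curve_fit(step_edge_model, x, profile, p0=p0)
    return popt[2]  # x0: edge position with sub-pixel resolution

# Synthetic test: true edge at x = 6.3 pixels, plus a little noise.
x = np.arange(13, dtype=float)
profile = step_edge_model(x, 20.0, 200.0, 6.3, 0.8) + np.random.normal(0.0, 1.0, x.size)
print("estimated edge position:", locate_edge_subpixel(profile))
```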
The sub-pixel edge detection algorithm is then analysed on the basis of a modified Bezier function. An arbitrary object plane can be regarded as the combination of innumerable small elements, and each small element can be regarded as a δ function. If the light-amplitude distribution that the lens or imaging system produces for a point source is known, the optical field produced by an arbitrary object after imaging can be obtained by linear superposition, which yields the intensity distribution on the image plane. On this basis we analyse the modified-Bezier-based sub-pixel edge detection algorithm and the Gaussian fitting algorithm through a series of designed experiments.

Binocular calibration is carried out for the CCD cameras. A computer vision system first acquires image information with a CCD camera in order to compute the position and geometric shape of a 3D object and then to recognize it. The brightness of each point on the image plane reflects the intensity of the light reflected from a point on the spatial object, while the position of that point on the image plane corresponds to the position of the point on the object surface; these relationships are determined by the geometric model of CCD imaging. Camera calibration, in which the parameters of this geometric imaging model are obtained by experiment and computation, is an essential part of non-contact measurement. Its goal is to establish the correspondence between the image coordinate frame of the camera and the 3D world coordinate frame of the object, so that the intrinsic and extrinsic parameters of the camera can be recovered and, from 2D image-plane coordinates, the real position of the corresponding spatial point can be deduced. With these parameters and a stereo imaging model of the two cameras, the 3D information of the observed object can be recovered and 3D measurement realized. In binocular stereo vision the relative position and orientation of the two cameras must be determined; in many applications, however, the intrinsic parameters and the relative pose need not be solved explicitly, and it suffices to establish a mapping between the 2D projection coordinates and the 3D coordinates of the observed point. Because of manufacturing and assembly errors in the camera optical system, the real image projected onto the image plane deviates from the ideal image by optical distortion; lens distortion reduces calibration precision and hence measurement precision, so it must be taken into account in the calibration. For the binocular calibration of the CCD cameras we propose an improved calibration method based on Tsai's method; experimental results show that the proposed method is simpler without reducing calibration precision.
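As a generic illustration of the coordinate-frame chain and of why lens distortion enters the calibration, the following sketch projects a world point to pixel coordinates through a standard pinhole model with radial and tangential distortion terms. It is not the improved Tsai-based method of the dissertation; the intrinsic matrix, pose and distortion coefficients are hypothetical values chosen only to exercise the model.

```python
# Generic pinhole-plus-distortion projection sketch (not the dissertation's
# improved Tsai calibration); all parameter values below are hypothetical.
import numpy as np

def project_point(Xw, R, t, K, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Map a world point to pixel coordinates through the frame chain
    world -> camera -> image plane -> pixel, with radial (k1, k2) and
    tangential (p1, p2) lens distortion applied to the normalized coordinates."""
    Xc = R @ Xw + t                      # world frame -> camera frame
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]  # perspective division: normalized image plane
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    uvw = K @ np.array([xd, yd, 1.0])    # image plane -> pixel frame via intrinsics
    return uvw[:2]

# Hypothetical intrinsics and pose, only to exercise the model.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 500.0])   # camera 500 mm in front of the part
print(project_point(np.array([10.0, -5.0, 0.0]), R, t, K, k1=-0.12, k2=0.03))
```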
Finally, the 3D coordinates of points and the dimensions of objects are measured. Obtaining the 3D coordinates of a point is an important problem in computer vision and a critical step in simulating the function of human eyes with a computer: only through 3D spatial measurement can the stereo information of an object be recovered from 2D image coordinates in which the depth information has been lost. In stereo vision, 3D spatial measurement is in fact the inverse of camera calibration. First, the intrinsic and extrinsic parameters of the CCD cameras are obtained by calibration; then the lens distortion is corrected; finally, the calibrated parameters are substituted into the 3D measurement model to recover the 3D point. The resulting 3D point measurement is non-contact and highly precise, and it is applied to measuring the dimensions of mechanical parts.
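The final reconstruction step, recovering a 3D point from its pixel coordinates in the two calibrated cameras, can be illustrated by standard linear (DLT) triangulation; the sketch below assumes hypothetical projection matrices for a simple stereo rig and is not taken from the dissertation.

```python
# Minimal linear (DLT) triangulation sketch for the final reconstruction step;
# the projection matrices here are hypothetical, standing in for the calibrated
# left/right cameras of a binocular rig.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its pixel coordinates in two calibrated views.
    P1, P2 are 3x4 projection matrices K[R|t]; uv1, uv2 are (u, v) pixels."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]                  # back to inhomogeneous coordinates

# Hypothetical stereo rig: identical intrinsics, right camera offset by a 60 mm baseline.
K = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 480.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])
Xtrue = np.array([25.0, -10.0, 500.0, 1.0])
uv1 = (P1 @ Xtrue)[:2] / (P1 @ Xtrue)[2]
uv2 = (P2 @ Xtrue)[:2] / (P2 @ Xtrue)[2]
print(triangulate(P1, P2, uv1, uv2))     # should recover approximately (25, -10, 500)
```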
Keywords/Search Tags: computer vision, edge detection, binocular vision, calibration, distortion, 3-dimensional measurement