
Research On Image Feature Extraction And Matching Technology And Its Application In Object Recognition

Posted on: 2008-07-26
Degree: Master
Type: Thesis
Country: China
Candidate: X S Lu
Full Text: PDF
GTID: 2178360212996077
Subject: Computer application technology
Abstract/Summary:
Image feature extraction and recognition are of great significance in computer networks, computer graphics, computer vision, pattern recognition, topography, artificial intelligence and other fields. Working with feature points usually involves three steps: (1) detection of image features; (2) a valid description of the feature points; (3) reasonable matching among the feature descriptors. This thesis reviews the state of the art of feature detection and feature description in detail and defines a criterion for evaluating feature descriptors. Guided by this criterion, we analyze the disadvantages of the SIFT descriptor and overcome its shortcomings; both theory and experimental data show that the performance of the resulting descriptor is significantly improved. Taking the limitations of local descriptors into account, we further propose a global matching strategy in which global image information is added during the matching procedure. Local descriptors lack global information, and our method overcomes this problem, so the matching accuracy of the descriptors is significantly improved. Finally, the descriptors are applied to an object recognition task formulated from a statistical point of view. Based on the analysis above, the following aspects are studied.

1. Detection and description of features.
Image features change with the geometric and physical characteristics of the scene. Images contain special information, called image features, that makes one image distinguishable from another. There are many types of image features, for example colour, texture, shape and the spatial relationships among features; which kind of feature should be extracted depends on the application at hand. The main purpose of feature detection is to determine parameters such as location, scale and other useful information. The main purpose of feature description is to describe the feature points reasonably and to find the correspondences among feature points across many images. Since a feature varies with its scale, the feature scale must be determined so that the same feature can be detected in two images; the difference-of-Gaussian (DoG) operator establishes a scale space and then determines the locations and scales of the features. One purpose of extracting feature points is to recover the relationship between images, and other useful information, by matching features across many images. The matching measure is usually the sum of squared differences (SSD) or the normalized cross-correlation (NCC); NCC has the advantage of withstanding changes in brightness and contrast.

2. An evaluation criterion for descriptors.
Good stability and distinctiveness are both important for descriptors. Under an image transformation, the stability of a descriptor is proportional to the overlap ratio of the corresponding regions. The performance of a descriptor can be described as a nonlinear function of stability and distinctiveness, so descriptor quality results from the combined effect of the two.

3. Improvement of the SIFT descriptor.
To speed up the construction of the scale space, Lowe introduced the SIFT feature extraction algorithm. Briefly, the algorithm builds a group of multi-scale images and fixes the scales of the features while selecting their locations.
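As a concrete illustration of this scale-space step, the following is a minimal sketch of building one octave of a Gaussian/DoG stack; it is not the thesis implementation, and the base sigma and number of intervals are assumptions taken from the common SIFT defaults.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(image, base_sigma=1.6, intervals=3):
    """Build one octave of Gaussian-blurred images and their
    difference-of-Gaussian (DoG) layers.

    Assumptions (not taken from the thesis): base_sigma=1.6 and
    intervals=3, i.e. the usual choices in Lowe's SIFT paper.
    """
    k = 2.0 ** (1.0 / intervals)      # scale step between adjacent layers
    n_layers = intervals + 3          # extra layers so every interval has DoG neighbours
    sigmas = [base_sigma * (k ** i) for i in range(n_layers)]

    # Gaussian stack: the same image blurred at increasing scales.
    gaussians = [gaussian_filter(image.astype(np.float64), s) for s in sigmas]

    # DoG stack: adjacent Gaussian layers subtracted.
    dogs = [g2 - g1 for g1, g2 in zip(gaussians[:-1], gaussians[1:])]
    return np.stack(dogs), sigmas
```

Candidate feature locations and scales are then taken at the local extrema of this DoG stack, after which the image is downsampled and the next octave is processed in the same way.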
In the ideal, continuous setting, feature scales are chosen from a continuous function on noise-free data. In practice, however, the Gaussian kernel and the image convolution involve interpolation, discretization and similar operations, and the discrete data are further affected by noise, so the detected scales are inaccurate. According to the evaluation criterion above, the SIFT descriptor does not tolerate this scale error well, so a more reasonable descriptor is introduced to enhance descriptor stability and matching accuracy. To validate this conclusion, extensive experiments were carried out on a wide variety of images; the algorithm is confirmed both by theory and by a large number of experiments.

4. A global matching algorithm for local descriptors that optimizes the matching result.
Local descriptors characterize local regions well and can be computed in relatively little time, but they only represent local features and contain no global information. Experiments show that if global information is added during the matching process, the matching results of local descriptors improve significantly. The thesis investigates two kinds of global information used in matching: (1) under a two-dimensional similarity transformation, the relative offset angle of the dominant orientation is the same for every descriptor; (2) under a two-dimensional affine transformation, excluding feature points that are inconsistent with the global transformation improves the matching result. When this global information is applied in the image matching experiments, both theory and experiment show that the results improve significantly, whether the initial matching accuracy is high or low.

5. An object recognition application based on feature points.
Object recognition proceeds as follows: features are detected and descriptors are extracted; the descriptors are clustered to form a codebook; and probabilistic latent semantic analysis (pLSA) is applied. This thesis analyzes the pLSA algorithm, the vector quantization algorithm and the expectation-maximization (EM) algorithm in detail. pLSA is a probabilistic model that analyzes the relations between two data sets and is used here for learning and recognizing natural scene categories. The basic principle of vector quantization is that a large amount of data is mapped onto a small set of quantized vectors, so the data can be compressed without losing too much information. The EM algorithm is a maximum likelihood estimation method for parameter estimation that is widely applicable to the analysis of incomplete data. The decision threshold in the experiments was selected with an ROC curve.
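A minimal sketch of the codebook step follows, assuming k-means as the vector quantizer; the codebook size and the clustering details are illustrative choices, not the settings used in the thesis.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def build_codebook(descriptors, n_words=200):
    """Vector-quantize local descriptors into a visual codebook.

    descriptors: (N, D) array of descriptors pooled from training images.
    n_words: codebook size (an illustrative value, not the thesis value).
    """
    centroids, _ = kmeans2(descriptors.astype(np.float64), n_words, minit='++')
    return centroids

def bag_of_words(descriptors, codebook):
    """Map one image's descriptors to a normalized visual-word histogram."""
    words, _ = vq(descriptors.astype(np.float64), codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    return hist / max(hist.sum(), 1.0)
```

Each image is thus represented by a histogram of visual-word counts, which forms the document-by-word matrix that pLSA operates on.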
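For the pLSA learning step, the following is a compact sketch of the standard EM iterations on the image-by-word count matrix; the topic count, initialization and fixed iteration budget are assumptions for illustration, not the thesis settings.

```python
import numpy as np

def plsa(counts, n_topics=10, n_iter=100, seed=0):
    """Fit pLSA by EM on a (documents x words) count matrix.

    counts[d, w] = number of times visual word w occurs in image d.
    Returns p_w_z (words given topic) and p_z_d (topics given document).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape

    # Random initialization of the conditional distributions.
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # E-step: responsibilities p(z | d, w), shape (docs, words, topics).
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        joint /= joint.sum(axis=2, keepdims=True) + 1e-12

        # M-step: re-estimate p(w|z) and p(z|d) from expected counts.
        weighted = counts[:, :, None] * joint        # n(d,w) * p(z|d,w)
        p_w_z = weighted.sum(axis=0).T               # (topics, words)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=1)                 # (docs, topics)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12

    return p_w_z, p_z_d
```

The learned topic mixtures p(z|d) can then be used as image representations for scene classification, with the final decision threshold chosen from an ROC curve as described above.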
Keywords/Search Tags: Application