
Pattern Classification Based On Kernel Methods And Stochastic Learning Of The Cumulative

Posted on: 2009-07-15    Degree: Doctor    Type: Dissertation
Country: China    Candidate: F Zhao    Full Text: PDF
GTID: 1118360242478268    Subject: Computer application technology
Abstract/Summary:
Pattern classification is one of the most important problems in pattern recognition and is applied in many scientific areas. This dissertation contributes to pattern classification in two main respects.

First, the dissertation deals with kernel methods, which have received intensive attention from the pattern classification community. The main contributions in this respect are as follows:

(1) It is well known that the performance of kernel methods is strongly influenced by the kernel parameter, and selecting the optimal parameter is a key and difficult problem. This dissertation presents a novel parameter-optimization approach based on the idea that the optimal parameter should make the structure of the training samples, once mapped into the feature space, satisfy the demands of the corresponding linear algorithm. Two questions are central to this idea: how to describe the structure of the mapped data, and how to estimate how closely that structure matches the demands of the linear algorithm. Since the structure of the data can be captured by an orthogonal basis of the subspace they span, a method for describing the structure of the mapped data is presented that sidesteps the fact that the mapped data cannot be expressed in explicit form. In addition, a new criterion based on a maximum-entropy non-Gaussianity measurement is presented for estimating how closely a data distribution approaches a hypersphere; it is used to select a suitable kernel parameter for support vector domain description (SVDD).

(2) Many kernel methods suffer from high computational complexity and slow feature extraction when the number of training samples is large.
To tackle these problems, a fast method is presented based on the idea that the optimal projection vector can be expressed in terms of a basis of the subspace spanned by the training samples mapped into the feature space. Moreover, an optimized algorithm based on the theory of linear correlation is proposed for finding such a basis. Fast variants of kernel Fisher discriminant analysis and kernel principal component analysis are derived accordingly; they reduce the computational complexity from O(n^3) to O(r^3), where n is the number of training samples and r is the size of the basis, and markedly accelerate feature extraction.

(3) Extracting effective discriminant features is one of the challenging tasks of pattern classification. A novel kernel-based method for extracting nonlinear discriminant features is presented, named the kernel optimal transformation and cluster centers algorithm (KOT-CC). KOT-CC is a powerful technique for extracting nonlinear discriminant features and is particularly effective on recognition problems with serious overlap between the patterns of different classes.

Second, the dissertation deals with radar target recognition using high-resolution range profiles (HRRP), an important application domain of pattern classification. The main contributions in this respect are as follows:

(1) To avoid the model mismatch that can arise with parametric probability density estimation, a nonparametric method, stochastic learning of the cumulative (SLC), is presented for estimating the probability density of HRRP.
Experimental results on real measured field data demonstrate the validity of the proposed learning algorithm.

(2) The idea of compounding parametric and nonparametric probability density estimation is presented for HRRP density estimation for the first time. Experimental results with Gamma-SLC, a method built on this idea, show that the two kinds of estimator compensate for each other's shortcomings, yielding a higher recognition rate than either approach used alone.

(3) A new criterion for evaluating probability density estimates is designed using the maximum-entropy non-Gaussianity measurement. Compared with conventional criteria, the presented criterion is easy to apply and can be carried over to other areas of probability density estimation research.
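As an illustration of the hypersphere criterion in contribution (1) of the kernel-methods part, the sketch below scores candidate RBF kernel widths by how nearly the mapped training samples are equidistant from their feature-space mean, computed entirely through the kernel trick. This is a simplified proxy, not the dissertation's maximum-entropy criterion; the function names and the coefficient-of-variation score are illustrative assumptions.

```python
import numpy as np

def feature_space_distances(K):
    """Squared distances ||phi(x_i) - m||^2 to the feature-space mean m,
    computed only from the kernel matrix K (the kernel trick)."""
    return np.diag(K) - 2.0 * K.mean(axis=1) + K.mean()

def hypersphere_score(X, gamma):
    """Lower score = mapped samples lie closer to a thin spherical shell
    around their mean (distances to the mean nearly equal)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                        # RBF kernel matrix
    d = np.sqrt(np.maximum(feature_space_distances(K), 0.0))
    return d.std() / d.mean()                      # coefficient of variation

def select_gamma(X, candidates):
    """Pick the candidate width whose mapped data look most hypersphere-like."""
    scores = [hypersphere_score(X, g) for g in candidates]
    return candidates[int(np.argmin(scores))]
```

In practice such a score would be traded off against other desiderata (e.g. avoiding degenerate very large widths); the point here is only that the structure of the implicitly mapped data is accessible through K alone.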
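The O(n^3)-to-O(r^3) reduction in contribution (2) of the kernel-methods part rests on finding r training samples whose feature-space images are linearly independent. The dissertation's linear-correlation algorithm is not reproduced in this abstract; pivoted Cholesky factorization of the kernel matrix, sketched below, is one standard way to obtain such a basis and is used here purely as an illustration.

```python
import numpy as np

def kernel_basis_indices(K, tol=1e-8):
    """Greedily select indices of training samples whose feature-space
    images are numerically linearly independent, via pivoted Cholesky
    on the kernel matrix K. Costs O(n * r^2) for r selected pivots."""
    n = K.shape[0]
    d = np.array(np.diag(K), dtype=float)   # residual diagonal
    L = np.zeros((n, 0))
    idx = []
    for _ in range(n):
        j = int(np.argmax(d))
        if d[j] <= tol:                     # remaining samples are linear
            break                           # combinations of chosen ones
        col = (K[:, j] - L @ L[j]) / np.sqrt(d[j])
        L = np.column_stack([L, col])
        d -= col ** 2
        d[d < 0] = 0.0                      # guard against round-off
        idx.append(j)
    return idx
```

Once r such indices are known, eigenproblems like those of kernel PCA or kernel Fisher discriminant analysis can be posed in the r-dimensional spanned subspace, which is where the O(r^3) cost comes from.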
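The compounding idea behind Gamma-SLC can be illustrated as a convex combination of a parametric and a nonparametric density estimate. SLC itself is not specified in this abstract, so a Gaussian kernel density estimate stands in for the nonparametric component; the method-of-moments Gamma fit, the mixing weight alpha, and the bandwidth are all illustrative assumptions.

```python
import math
import numpy as np

def fit_gamma_moments(data):
    """Method-of-moments fit: shape k = m^2/v, scale theta = v/m."""
    m, v = data.mean(), data.var()
    return m * m / v, v / m

def gamma_pdf(x, shape, scale):
    """Gamma density for x > 0."""
    return (x ** (shape - 1) * np.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def kde_pdf(x, data, bw):
    """Gaussian kernel density estimate (stand-in for SLC)."""
    z = (x[:, None] - data[None, :]) / bw
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (bw * math.sqrt(2 * math.pi))

def compound_pdf(x, data, alpha=0.5, bw=0.3):
    """Convex combination of parametric and nonparametric estimates,
    in the spirit of compounding the two kinds of estimator."""
    k, theta = fit_gamma_moments(data)
    return alpha * gamma_pdf(x, k, theta) + (1 - alpha) * kde_pdf(x, data, bw)
```

The parametric term contributes smoothness and tail behavior where data are sparse, while the nonparametric term tracks local structure the parametric model misses, which mirrors the complementary behavior reported for Gamma-SLC.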
Keywords/Search Tags: Kernel function, Support vector domain description, Kernel principal component analysis, Kernel Fisher discriminant analysis, High-resolution range profile, Stochastic learning of the cumulative