
Research On Kernel Methods For Pattern Recognition

Posted on: 2008-01-21    Degree: Doctor    Type: Dissertation
Country: China    Candidate: X R Li    Full Text: PDF
GTID: 1118360272477767    Subject: Control theory and control engineering
Abstract/Summary:
Pattern recognition is an application-oriented subject. Its theories and methods have been successfully applied in many areas, and every such application is closely tied to the characteristics of the specific problem: to date, no single method applies to all problems. Because most practical problems involve high-dimensional, complex multi-class patterns, studying pattern recognition methods for such problems is both necessary and significant.

Kernel-based learning is a comparatively new tool developed from statistical learning theory; it effectively avoids the problems of local minima and overfitting. Kernel-based learning is essentially a nonlinear information-processing technique and has proven more effective than other learning methods on high-dimensional, complex recognition problems with nonlinear patterns. Research on and application of kernel methods are in the ascendant, and new algorithms are continually proposed; but as a still-maturing technology, it faces open problems of structure, function, classification, and so on. In general, research on kernel-based methods for complex pattern recognition is of great significance.

This dissertation focuses on kernel-based feature extraction and pattern classification methods. The main contributions are as follows:

1. To compute the optimal discriminant vectors for kernel Fisher discriminant analysis in singular cases, an improved kernel direct discriminant analysis (IKDDA) is proposed. Based on the theory of reproducing kernels, the kernel within-class and kernel between-class scatter matrices are defined, and the Fisher discriminant criterion in the high-dimensional feature space is transformed into a kernel Fisher discriminant criterion. Using an isomorphic mapping and the singular value decomposition theorem, the maximum of the kernel Fisher criterion is obtained by minimizing its reciprocal in a small space, and the final solution is obtained without treating the null space and non-null space of the kernel within-class scatter matrix separately. Experiments on the ORL and UMIST face image databases show that the proposed method achieves a lower error rate and higher speed than competing methods (a sketch of the kernel Fisher criterion follows item 2).

2. A constrained linear discriminant analysis (CLDA) method is proposed for feature extraction in pattern recognition problems with high dimension and small sample size. Using a whitening step, Gram-Schmidt orthogonalization, and orthogonal subspace projection, an optimal transformation matrix is designed that minimizes the ratio of intra-class to inter-class distance while constraining the transformed class centers to lie along specific, mutually orthogonal directions. For the small-sample problem of face recognition, the whitening step is realized by singular value decomposition. Using the kernel trick, CLDA is generalized to constrained kernel discriminant analysis (CKDA). Experimental results on face images show that both CLDA and CKDA are effective and feasible (see the two sketches below).
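To make item 1 concrete, here is a minimal sketch of the kernel Fisher criterion itself, assuming an RBF kernel: it builds the kernel between-class and within-class scatter matrices M and N from the Gram matrix and solves the regularized generalized eigenproblem. This is a plain solver for illustration, not the dissertation's IKDDA reciprocal-minimization scheme; the function names and the gamma and reg parameters are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_gram(X, gamma=0.5):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def kernel_fisher_directions(X, y, gamma=0.5, reg=1e-6):
    """Coefficients alpha maximizing the kernel Fisher criterion
    J(alpha) = (alpha' M alpha) / (alpha' N alpha), with M and N the
    kernel between-class and within-class scatter matrices."""
    n = len(y)
    K = rbf_gram(X, gamma)
    m_all = K.mean(axis=1, keepdims=True)         # overall kernel mean
    M = np.zeros((n, n))
    N = np.zeros((n, n))
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        Kc = K[:, idx]                            # kernel columns of class c
        mc = Kc.mean(axis=1, keepdims=True)
        M += len(idx) * (mc - m_all) @ (mc - m_all).T
        H = np.eye(len(idx)) - 1.0 / len(idx)     # class centering matrix
        N += Kc @ H @ Kc.T
    # regularization keeps N invertible in the singular small-sample case
    _, A = eigh(M, N + reg * np.eye(n))           # ascending eigenvalues
    return A[:, ::-1][:, :len(np.unique(y)) - 1], K

# Usage: project training samples onto the leading discriminant direction.
X = np.vstack([np.random.randn(20, 5), np.random.randn(20, 5) + 2.0])
y = np.array([0] * 20 + [1] * 20)
alpha, K = kernel_fisher_directions(X, y)
Z = K @ alpha                                     # 40 x 1 projected features
```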
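Likewise for item 2, a minimal sketch of the constrained-transform idea, assuming SVD-based whitening of the within-class scatter followed by QR (Gram-Schmidt) orthogonalization of the whitened class means, so that transformed class centers lie along mutually orthogonal directions. The dissertation's exact constraint set and optimization are not reproduced; all names here are illustrative.

```python
import numpy as np

def clda_fit(X, y, reg=1e-6):
    """Sketch of a constrained discriminant transform: whiten the
    within-class scatter, then orthogonalize the whitened class means."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    for c in classes:                             # within-class scatter
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    U, s, _ = np.linalg.svd(Sw)                   # SVD handles singular Sw
    Wh = U @ np.diag(1.0 / np.sqrt(s + reg))      # whitening matrix
    # whitened, globally centered class means, one column per class
    M = np.stack([(X[y == c].mean(axis=0) - mu) @ Wh for c in classes], axis=1)
    Q, _ = np.linalg.qr(M)                        # Gram-Schmidt step
    return Wh @ Q, mu, classes                    # d x C transform A

def clda_predict(X, A, mu, classes, centers):
    Z = (X - mu) @ A                              # constrained feature space
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return classes[np.argmin(d2, axis=1)]         # nearest transformed center

# Usage on toy data: fit, compute transformed class centers, classify.
X = np.vstack([np.random.randn(30, 10) + m for m in (0.0, 3.0, -3.0)])
y = np.repeat([0, 1, 2], 30)
A, mu, classes = clda_fit(X, y)
centers = np.stack([(X[y == c].mean(axis=0) - mu) @ A for c in classes])
pred = clda_predict(X, A, mu, classes, centers)
```

The kernel generalization (CKDA) would replace the raw inputs with a Gram matrix via the kernel trick, along the lines of the previous sketch.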
3. Designing the hierarchical structure is a key issue for decision-tree-based (DTB) multi-class support vector machine classification, and inter-class separability is an important basis for that design. A new method based on vector projection is proposed to measure inter-class separability, and two DTB multi-class support vector classifiers are designed around it: one with balanced branches and one with a one-against-all structure. Experiments on three large-scale data sets show that the proposed method speeds up DTB multi-class support vector machine classifiers and yields higher precision (an illustrative ordering sketch follows item 4).

4. An improved k-nearest-neighbor search method based on projection and the triangle inequality is presented, in which simple inequalities eliminate impossible data points and reduce the number of distance computations. An improved vector-projection method is proposed for pre-extracting boundary vectors. On this basis, a novel classification method is proposed for samples with a known distribution: a nonlinear function maps the input to a higher-dimensional space, vectors near the class boundaries are pre-extracted from the training samples, and, by the law of large numbers, the value of the class-conditional probability density function at each boundary vector is estimated with the k-nearest-neighbor method. The learning algorithm then constructs a radial basis function network, with the boundary vectors as centers, to approximate the class-conditional density of each class in the training set, and classification follows the minimum-error-rate Bayesian decision rule. Experiments on machine learning data sets show that the proposed algorithm classifies data with more than two classes quickly and effectively (see the k-NN pruning sketch below).
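As an illustration of item 3's design principle, the sketch below scores the separability of two classes by projecting them onto the line joining their means, then greedily peels off the most separable class when ordering a one-against-all tree. The gap-over-spread score and the greedy ordering are assumptions for illustration, not the dissertation's exact vector-projection measure; each tree node would then hold a binary SVM trained on its split.

```python
import numpy as np

def projection_separability(Xa, Xb):
    """Project both classes onto the unit vector joining their means and
    score the gap between the projected clusters relative to their spread."""
    w = Xb.mean(axis=0) - Xa.mean(axis=0)
    w = w / (np.linalg.norm(w) + 1e-12)
    pa, pb = Xa @ w, Xb @ w
    gap = pb.min() - pa.max()                     # > 0: separated along w
    return gap / (pa.std() + pb.std() + 1e-12)

def ova_tree_order(X, y):
    """Greedy one-against-all tree design: at each level, split off the
    class most separable from all remaining classes."""
    order, remaining = [], list(np.unique(y))
    while len(remaining) > 1:
        best = max(remaining, key=lambda c: projection_separability(
            X[y == c], X[np.isin(y, [d for d in remaining if d != c])]))
        order.append(best)                        # root-to-leaf peel order
        remaining.remove(best)
    return order + remaining
```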
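Finally, a sketch of the pruning idea in item 4: two cheap lower bounds on the Euclidean distance, the triangle inequality against a pivot point and a one-dimensional projection, let the search skip most full distance computations. The pivot choice, the candidate ordering, and the use of the first coordinate as the projection axis are illustrative assumptions; the boundary-vector RBF-network classifier built on the resulting k-NN density estimates is not reproduced here.

```python
import heapq
import numpy as np

def knn_search_pruned(X, q, k):
    """k-NN search using two lower bounds on d(x, q) to skip full distance
    computations: |d(x, p) - d(q, p)| for a pivot p (triangle inequality)
    and |x[0] - q[0]| (projection onto the first coordinate axis)."""
    pivot = X.mean(axis=0)
    d_xp = np.linalg.norm(X - pivot, axis=1)      # computable offline, once
    d_qp = np.linalg.norm(q - pivot)
    lb = np.abs(d_xp - d_qp)                      # triangle-inequality bound
    order = np.argsort(lb)                        # promising candidates first
    heap = []                                     # max-heap via negated dists
    for i in order:
        if len(heap) == k and lb[i] >= -heap[0][0]:
            break                                 # later bounds only grow: stop
        if len(heap) == k and abs(X[i, 0] - q[0]) >= -heap[0][0]:
            continue                              # projection bound prunes i
        d = np.linalg.norm(X[i] - q)              # full distance only if needed
        if len(heap) < k:
            heapq.heappush(heap, (-d, i))
        elif d < -heap[0][0]:
            heapq.heapreplace(heap, (-d, i))
    return sorted((-nd, i) for nd, i in heap)     # (distance, index) pairs

# Usage: five nearest neighbours of the origin among 1000 random points.
neighbours = knn_search_pruned(np.random.randn(1000, 8), np.zeros(8), k=5)
```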
Keywords/Search Tags: pattern recognition, kernel methods, kernel Fisher discriminant analysis, constrained linear discriminant analysis, constrained kernel discriminant analysis, decision-tree-based support vector machine multi-class classifier, vector projection