
Parameter Selection for Kernel-Based Methods

Posted on: 2009-04-16
Degree: Master
Type: Thesis
Country: China
Candidate: X Y Qiu
Full Text: PDF
GTID: 2178360242994747
Subject: Computer software and theory
Abstract/Summary:
In the 1990s, the field of pattern analysis experienced a "non-linear revolution": the support vector machine (SVM) was the first kernel-based method proposed to break free of linear constraints. Kernel-based methods were subsequently extended from classification to the other fields of pattern analysis, and these new techniques, driven by rigorous theoretical analysis, were developed with guarantees of computational efficiency. Kernel-based analysis is a powerful tool for mathematicians, scientists, and engineers, providing a rich family of methods for pattern analysis, signal processing, syntactic pattern recognition, and other pattern recognition fields. With the successful application of kernel methods, research on kernel-based learning algorithms and their applications has become one of the hot topics and mainstream directions in machine learning.

As with any new technique, the choice of kernel parameters plays an important role in kernel-based classification, and it has attracted increasing attention. How to find the best parameter has become an active research topic, and kernel selection algorithms that can significantly increase the efficiency of a kernel-based learning algorithm are drawing more and more interest.

Building on a survey of kernel clustering algorithms, this thesis studies kernel parameter selection methods and their applications. We use an objective function to select optimal kernel parameters for kernel-based nearest-neighbor (KNN) and kernel-based K-medoids classification, and we verify the effectiveness of the selected parameters through a set of experiments. The thesis also discusses the advantages of choosing the optimal kernel parameter across its many application fields.

The main contributions of this thesis are as follows:

1. Review the basic theory of kernel methods and the main clustering algorithms. The thesis systematically and comprehensively reviews the previously fragmented kernel-based theory, several kinds of kernel functions, and the construction of new kernel functions. The main clustering algorithms covered are the K-means algorithm, kernel Fisher discriminant analysis, and the kernel perceptron.

2. Propose two objective-function approaches for selecting the kernel parameters of KNN. Traditional algorithms for choosing optimal kernel parameters are time-consuming, wasteful of space, and limited in scalability. Exploiting the behavior of the minimum distance in kernel classification and the distribution of the data in the new feature space, we design an objective-function algorithm to choose the optimal parameter for the kernel-based nearest-neighbor algorithm. The selected parameter improves the correct classification rate significantly, and the objective function reduces the amount of computation and simplifies the computing process, thereby markedly improving the efficiency of the kernel algorithm with the new parameter; a minimal sketch of such a selection loop is given after the contribution list.
3. Propose an objective function for parameter selection in kernel-based K-medoids. Kernel-based K-medoids is an improvement on the KNN algorithm. Existing kernel-based classification focuses on using the kernel mapping to raise the correct classification rate, but that rate is directly tied to the chosen parameter. By describing the expected distribution of the data points in the new feature space, we choose the optimal parameters by minimizing the objective function, thereby improving the performance of kernel-based classification; a companion sketch of the kernel K-medoids step also follows the list. Experiments on artificial data sets and UCI data sets show the effectiveness of the algorithm.

4. Summarize kernel parameter selection methods and compare their effectiveness through experimental validation. The thesis describes methods for choosing the parameters of kernel-based classification, such as cut-and-try, which play an essential role in guiding the selection of kernel parameters. Related experimental validation and detailed experimental analysis are also provided.
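As a rough illustration of the kind of procedure described in contribution 2, the sketch below selects the width of a Gaussian (RBF) kernel by minimizing a simple objective over a small grid, using nothing but the Gram matrix. The thesis's actual objective function is not reproduced in this abstract, so a within-class versus between-class feature-space distance criterion stands in for it; the kernel choice, the grid, and all function names are illustrative assumptions rather than the author's method.

```python
import numpy as np

def rbf_gram(X, Z, gamma):
    """Gaussian (RBF) Gram matrix: K[i, j] = exp(-gamma * ||x_i - z_j||^2)."""
    sq = (X ** 2).sum(1)[:, None] + (Z ** 2).sum(1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * np.clip(sq, 0.0, None))

def objective(X, y, gamma):
    """Stand-in objective (not the thesis's own): mean within-class
    feature-space distance minus mean between-class distance.
    Squared feature-space distance: d2[i, j] = K[i,i] - 2*K[i,j] + K[j,j].
    Smaller values mean classes are tighter and better separated."""
    y = np.asarray(y)
    K = rbf_gram(X, X, gamma)
    diag = np.diag(K)
    d2 = diag[:, None] - 2.0 * K + diag[None, :]
    off_diag = ~np.eye(len(y), dtype=bool)
    same = (y[:, None] == y[None, :]) & off_diag
    diff = y[:, None] != y[None, :]
    return d2[same].mean() - d2[diff].mean()

def select_gamma(X, y, grid=(0.01, 0.1, 1.0, 10.0, 100.0)):
    """Grid search: keep the gamma with the smallest objective value."""
    return min(grid, key=lambda g: objective(X, y, g))
```

Because every quantity is computed from the Gram matrix, the same selection loop works for kernels other than the Gaussian, and the chosen parameter can be handed directly to the kernel classifier, including the kernel K-medoids sketch.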
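To make contribution 3 concrete, here is a minimal sketch of kernel K-medoids driven entirely by the Gram matrix, so that the selected kernel parameter enters only through K. The alternating assignment/update scheme, the function names, and the defaults are assumptions for illustration, not the thesis's exact algorithm.

```python
import numpy as np

def kernel_kmedoids(K, n_clusters, n_iter=100, seed=0):
    """K-medoids in feature space using only the Gram matrix K.

    Squared feature-space distances: d2[i, j] = K[i,i] - 2*K[i,j] + K[j,j].
    Medoids are restricted to data points, so the feature map is never
    needed explicitly -- the kernel parameter acts only through K.
    """
    n = K.shape[0]
    diag = np.diag(K)
    d2 = diag[:, None] - 2.0 * K + diag[None, :]

    rng = np.random.default_rng(seed)
    medoids = rng.choice(n, size=n_clusters, replace=False)

    for _ in range(n_iter):
        # Assign every point to its nearest medoid in feature space.
        labels = d2[:, medoids].argmin(axis=1)
        # Re-pick each medoid as the member with the smallest total
        # distance to the rest of its cluster.
        new_medoids = medoids.copy()
        for c in range(n_clusters):
            members = np.where(labels == c)[0]
            if members.size:
                costs = d2[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[costs.argmin()]
        if np.array_equal(new_medoids, medoids):
            break  # converged: medoids stopped moving
        medoids = new_medoids
    return labels, medoids

# Example use with the RBF Gram matrix from the parameter-selection sketch
# (X and y are hypothetical training data):
# K = rbf_gram(X, X, select_gamma(X, y))
# labels, medoids = kernel_kmedoids(K, n_clusters=3)
```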
Keywords/Search Tags: Support Vector Machine, K-means algorithm, Fisher discriminant analysis, Perceptron