
Multi-kernel Classifier Design Based On Reducing Complexity And Cost Sensitivity

Posted on: 2014-01-13
Degree: Master
Type: Thesis
Country: China
Candidate: Z X Niu
Full Text: PDF
GTID: 2248330395477852
Subject: Computer application technology
Abstract/Summary:
Multiple kernel learning algorithms can effectively solve problems that traditional single-kernel learning algorithms cannot, such as heterogeneous information, un-normalized data, and non-flat sample distributions. Multiple kernel learning has also attracted much attention for its flexibility and its better interpretation of the inherent information in samples. However, the computational and spatial complexity incurred when multiple kernel learning algorithms are used to design classifiers should not be ignored. Moreover, most existing multiple kernel learning algorithms do not evaluate or measure the contribution of each candidate kernel matrix used in constructing the optimal fusion matrix. Using all candidate kernel matrices to construct the classifier not only increases the computational complexity but also, to some extent, harms the rationality of the classifier. To address these problems, this paper puts forward a series of multi-kernel classifier design algorithms based on complexity reduction and cost sensitivity to improve the efficiency of multiple kernel learning algorithms, and verifies their effectiveness through a large number of simulation experiments. The main work of this paper is as follows:

(1) Following the multiple kernel learning framework, the original data sets are mapped into different feature spaces by choosing various kernel functions. In each feature space, the primordial kernel matrix is approximated using the Nyström matrix approximation technique. After obtaining the approximate candidate kernel matrices, the coefficient of each candidate is computed from the approximation error between the primordial and the approximate candidate kernel matrix, and the fusion kernel matrix is obtained as a convex combination of the approximate candidate kernel matrices.
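The Nyström-based fusion step described in (1) can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: the RBF kernel family, the landmark count m, and the inverse-error weighting rule are assumptions made here for concreteness.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * d2)

def nystrom_approx(X, gamma, m, rng):
    """Nystrom approximation K_hat = C W^+ C.T using m landmark samples."""
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)          # n x m cross-kernel block
    W = rbf_kernel(X[idx], X[idx], gamma)     # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T

def fuse_kernels(X, gammas, m=20, seed=0):
    """Weight each approximate candidate kernel by its (inverse) Nystrom
    approximation error, then form the convex combination."""
    rng = np.random.default_rng(seed)
    approx, errors = [], []
    for g in gammas:
        K = rbf_kernel(X, X, g)               # primordial kernel matrix
        K_hat = nystrom_approx(X, g, m, rng)  # approximate candidate
        approx.append(K_hat)
        errors.append(np.linalg.norm(K - K_hat, 'fro'))
    # smaller approximation error -> larger fusion coefficient
    inv = 1.0 / (np.array(errors) + 1e-12)
    beta = inv / inv.sum()                    # beta >= 0, sum(beta) = 1
    K_fused = sum(b * K for b, K in zip(beta, approx))
    return K_fused, beta
```

The fused matrix stays symmetric and positive semidefinite because it is a non-negative combination of Nyström approximations, so it can be plugged into any kernel classifier such as KMHKS.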
Finally, the fusion matrix is introduced into the KMHKS algorithm to design an optimal multiple kernel learning classifier, named NMKMHKS. To verify the validity and feasibility of the classifier, extensive experiments are conducted on synthetic data sets and UCI benchmark data sets; the experimental results confirm the effectiveness of the method.

(2) The contribution of each approximate candidate kernel matrix used in constructing the final optimal fusion kernel matrix is evaluated and measured. While guaranteeing the classification performance of the classifier, cost sensitivity is introduced into the NMKMHKS algorithm to eliminate useless approximate candidate kernel matrices by setting different thresholds; this method is named CMVLM. In this way, the complexity of the NMKMHKS algorithm is reduced. Comparative experiments against the original method under various thresholds on UCI benchmark, image, and biological data sets demonstrate the effectiveness of this idea.

(3) Guided by the use of cost sensitivity in implicit kernel mapping, it is brought into empirical kernel mapping, yielding a cost-sensitive multiple empirical kernel learning algorithm named CRMEK-MHKS. CRMEK-MHKS adopts empirical kernel mapping instead of the traditional implicit kernel mapping to realize the mapping of data sets from a low-dimensional space into a high-dimensional space. The experimental results demonstrate its effectiveness to some extent.

These algorithms improve the efficiency of multiple kernel learning while reducing its computational and spatial complexity. Moreover, evaluating the contribution of each candidate kernel matrix by introducing cost sensitivity into multiple kernel learning further optimizes the designed classifiers while guaranteeing, and even improving, classification performance.
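The threshold-based elimination in (2) can be sketched as follows. The function name, the fixed-threshold semantics, and the renormalisation step are illustrative assumptions; the thesis's CMVLM couples such a pruning rule with the cost-sensitive learning objective rather than applying it in isolation.

```python
import numpy as np

def prune_kernels(kernels, coeffs, threshold=0.1):
    """Drop candidate kernels whose fusion coefficient falls below the
    threshold, then renormalise the survivors so the combination stays
    convex (non-negative weights summing to one)."""
    coeffs = np.asarray(coeffs, dtype=float)
    keep = coeffs >= threshold
    if not keep.any():                        # fallback: keep the best kernel
        keep = coeffs == coeffs.max()
    survivors = [K for K, k in zip(kernels, keep) if k]
    weights = coeffs[keep] / coeffs[keep].sum()
    fused = sum(w * K for w, K in zip(weights, survivors))
    return fused, keep, weights
```

Raising the threshold discards more candidates, trading a little fusion fidelity for lower computational and storage cost, which is the complexity/performance trade-off the thesis explores under different threshold settings.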
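The empirical kernel mapping underlying (3) can be sketched via an eigendecomposition of the training kernel matrix: unlike implicit kernel mapping, it produces explicit finite-dimensional feature vectors whose inner products reproduce the kernel. This is the standard construction; the tolerance and interface here are illustrative, not the thesis's exact algorithm.

```python
import numpy as np

def empirical_kernel_map(K, tol=1e-8):
    """Empirical kernel mapping of the training set: factor the kernel
    matrix K = Q diag(lam) Q^T and map sample i to row i of
    Q_r diag(sqrt(lam_r)), keeping only eigenvalues above tol.
    Inner products of the mapped samples reproduce K."""
    lam, Q = np.linalg.eigh(K)                # eigenvalues in ascending order
    order = np.argsort(lam)[::-1]             # sort descending
    lam, Q = lam[order], Q[:, order]
    keep = lam > tol                          # drop numerically zero directions
    return Q[:, keep] * np.sqrt(lam[keep])    # n x r explicit feature matrix
```

Because the mapped samples live in an ordinary vector space, a linear classifier such as MHKS can be trained on them directly, which is what makes the empirical variant attractive for the CRMEK-MHKS design.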
Keywords/Search Tags: Multiple Kernel Learning, Classifier Design, Nyström Matrix Approximation, Pattern Recognition, Cost Sensitivity, Empirical Kernel Mapping