
Performance Optimization Of Non-linear Classifier

Posted on: 2016-03-06
Degree: Master
Type: Thesis
Country: China
Candidate: M Z Lu
Full Text: PDF
GTID: 2298330467477359
Subject: Computer Science and Technology
Abstract/Summary:
The kernelization of a classifier constructs a feature space in which patterns that are not linearly separable in the original input space can be recognized more easily. In particular, the introduction of multiple kernels enables the classifier to describe the patterns from different perspectives. Compared with a single-kernel classifier, a multiple-kernel one can extract more information from the original patterns, especially when they are heterogeneous. It is also less sensitive to the kernel parameters and is therefore more likely to achieve a better recognition result. However, kernelization generally maps the input patterns into a feature space of (much) higher dimension, which leads to high time and space complexity.

Thus, in this thesis we aim to optimize the performance of multiple kernel machines without losing much recognition accuracy. Two algorithms are proposed: the reduced multiple empirical kernel learning machine (RMEKLM) and the cost-sensitive multi-view learning machine (CMVLM). RMEKLM is based on the empirical kernel mapping, which maps the input space into an orthonormal subspace of the feature space and thereby reduces the dimension. CMVLM, in contrast, is based on the implicit kernel mapping and reduces the time and space complexity by optimizing the combination weights of the individual kernel spaces. The main contributions of this thesis are as follows:

1. In RMEKLM, mapping the data set from the input space into an orthonormal subspace of the feature space allows the geometric structure to be visualized. Because of the lower dimension of the subspace, RMEKLM reduces both the time and space complexity. Gauss elimination is adopted to generate the orthonormal basis, which is both efficient and effective.

2. CMVLM first introduces a view-dependent cost, which differs from the existing class-dependent and example-dependent costs.
By combining this cost with the discriminant scatter, CMVLM can effectively measure the contribution of each kernel space. Only the useful kernel spaces are then reserved for the subsequent training and testing processes, which significantly reduces the time and space complexity. Moreover, CMVLM is not tied to the multiple kernel framework used in this thesis and can easily be applied to other frameworks.

In the experiments, several off-the-shelf multiple- and single-kernel learning machines are chosen to verify the effectiveness and efficiency of the two algorithms on a broad range of data sets, including benchmark UCI, image, and bioinformatics data.
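To make the RMEKLM idea concrete, the following sketch shows a standard empirical kernel mapping: the kernel (Gram) matrix of the training set is eigendecomposed and each sample is sent to a low-dimensional vector whose inner products reproduce the kernel values. This is an illustrative, eigendecomposition-based variant only; the thesis itself builds the orthonormal basis with Gauss elimination, and the function name and tolerance here are our own choices.

```python
import numpy as np

def empirical_kernel_map(K, r=None, tol=1e-10):
    """Map n training samples into an r-dimensional empirical feature space
    from their n x n kernel (Gram) matrix K."""
    # Symmetric eigendecomposition: K = Q diag(lam) Q^T
    lam, Q = np.linalg.eigh(K)
    # Sort eigenvalues in descending order and drop non-positive ones
    order = np.argsort(lam)[::-1]
    lam, Q = lam[order], Q[:, order]
    keep = lam > tol
    lam, Q = lam[keep], Q[:, keep]
    if r is not None:            # optional further dimension reduction
        lam, Q = lam[:r], Q[:, :r]
    # Row i of Z is the empirical feature vector of sample i; by
    # construction Z @ Z.T reproduces the (truncated) kernel matrix.
    Z = Q * np.sqrt(lam)
    return Z

# Toy example: RBF kernel on a few 2-D points
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K = np.exp(-sq)
Z = empirical_kernel_map(K)
print(np.allclose(Z @ Z.T, K))  # the mapping preserves the kernel values
```

Because the mapped vectors live in an explicit finite-dimensional space, any linear learner can be trained on `Z` directly, which is what makes the subsequent time and space savings possible.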
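The kernel-space selection step of CMVLM can be illustrated with a simple stand-in criterion: score each candidate kernel "view" by how much more similar same-class samples are than different-class samples, then reserve only the top-scoring views. The score below is a hypothetical proxy for the thesis's combination of view-dependent cost and discriminant scatter, and the function names are our own.

```python
import numpy as np

def view_score(K, y):
    """Illustrative separability score for one kernel 'view': average
    within-class similarity minus average between-class similarity."""
    same = (y[:, None] == y[None, :])
    np.fill_diagonal(same, False)      # ignore self-similarity
    diff = (y[:, None] != y[None, :])
    return K[same].mean() - K[diff].mean()

def select_views(kernels, y, keep=1):
    """Rank candidate kernel spaces and reserve only the top `keep` of them."""
    scores = [view_score(K, y) for K in kernels]
    order = np.argsort(scores)[::-1][:keep]
    return sorted(order.tolist()), scores

def rbf(X):
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    return np.exp(-sq)

# One informative view (well-separated classes) and one pure-noise view
rng = np.random.default_rng(1)
y = np.array([0, 0, 0, 1, 1, 1])
X_good = np.vstack([rng.normal(0, 0.3, (3, 2)), rng.normal(5, 0.3, (3, 2))])
X_noise = rng.normal(size=(6, 2))
kept, scores = select_views([rbf(X_good), rbf(X_noise)], y, keep=1)
print(kept)  # the informative view (index 0) is reserved
```

Training and testing then proceed only on the reserved kernel spaces, which is where the reduction in time and space complexity comes from.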
Keywords/Search Tags: Non-linear classifier, Feature mapping, Kernel method, Empirical kernel mapping, Implicit kernel mapping, Multiple kernel learning, Dimension reduction, Cost, Time and space complexity