
Multi-kernel Regularization Learning and Related Problems

Posted on: 2012-11-22
Degree: Master
Type: Thesis
Country: China
Candidate: P Liu
Full Text: PDF
GTID: 2120330335479776
Subject: Applied Mathematics
Abstract/Summary:
Regularized kernel-based algorithms are among the important methods of statistical learning theory, with wide applications in fields such as computer science, economics, and biology. Consistency analysis and error-bound estimation play a crucial role in assessing the reliability and complexity of such algorithms. In some applications, learning algorithms built on a single kernel are not satisfactory, which motivates hypothesis spaces built on multiple kernels. Such spaces can both overcome the limited approximation ability of the RKHS of a single kernel and avoid the computational difficulties caused by kernels of low smoothness. In recent years multi-kernel learning has attracted increasing attention, extensive research has been done, and a number of results have been obtained. In this thesis we mainly study multi-kernel regularization algorithms based on RKHSs. The thesis is structured as follows.

In Chapter 1, we introduce the background, the empirical risk minimization principle, regularization algorithms, the reproducing kernel Hilbert space, and the main known results on multi-kernel problems. We also collect the basic preliminaries and outline the main contributions of the thesis.

In Chapter 2, strictly positive definite functions on the circle are studied. First, we recall the definition of strict positive definiteness and review the previously known sufficient and necessary conditions, which involve the Kronecker approximation property and the ubiquitous modulo condition. Recall that a continuous function g on the circle S^1 is strictly positive definite if, for any distinct points t_1, ..., t_N in S^1 and any nonzero coefficients (c_1, ..., c_N) in R^N, the quadratic form sum_{j,k} c_j c_k g(t_j - t_k) is strictly positive. Second, we introduce the notion of rank-n strictly positive definite functions on S^1. Using its equivalence with strict positive definiteness, together with a necessary and sufficient condition for rank-n strict positive definiteness, we prove that the ubiquitous modulo condition is necessary. Whether it is also sufficient remains an open problem. Finally, we establish its sufficiency in several cases.

In Chapter 3, the existence of an optimal solution for multi-kernel regularization learning is studied. In Section 1, the background of the multi-kernel problem is introduced: we define a map from the space of kernel functions to the space of positive semi-definite matrices, so that multi-kernel regularized algorithms can be viewed as a map defined on the kernel space. In Section 2, after endowing the kernel space with a topology, we prove by an operator approximation theorem that this map is continuous, which yields the main result: an optimal solution exists whenever the kernel space is compact. This generalizes a related result of C. A. Micchelli. In Section 3, we consider the Gaussian multi-kernel regularized algorithm, whose kernel space is not compact. By the representer theorem we reduce the question to minimizing a certain function, and we obtain two sufficient conditions under which the optimal solution does or does not exist. A small numerical sketch of this kernel-selection viewpoint follows.
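For illustration, here is a minimal numerical sketch of the kernel-selection viewpoint of Chapter 3. It is a sketch under our own assumptions, not the thesis's construction: the kernel family is a Gaussian family whose width parameter is restricted to a compact interval (so the existence result for compact kernel spaces applies), the data are synthetic, the helper names (gaussian_gram, regularized_value) and all parameter values are ours, and the optimal kernel is located by a plain grid search over the regularized functional.

```python
# Minimal sketch (not the thesis's algorithm): regularized least squares over a
# parametrized Gaussian kernel family, selecting the kernel that minimizes the
# regularized empirical functional -- the quantity whose minimizer over a
# compact kernel set is shown to exist in Chapter 3.
import numpy as np

def gaussian_gram(x, sigma):
    """Gram matrix of the Gaussian kernel K(s, t) = exp(-|s - t|^2 / (2 sigma^2))."""
    d = x[:, None] - x[None, :]
    return np.exp(-d**2 / (2.0 * sigma**2))

def regularized_value(x, y, sigma, lam):
    """Value of the regularized least-squares functional at its minimizer in H_K.

    By the representer theorem the minimizer is f = sum_i alpha_i K(x_i, .),
    where alpha solves (K + lam * m * I) alpha = y and m is the sample size.
    """
    m = len(x)
    K = gaussian_gram(x, sigma)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)
    fx = K @ alpha                      # values of the minimizer on the sample
    empirical_risk = np.mean((fx - y) ** 2)
    rkhs_norm_sq = alpha @ K @ alpha    # ||f||_K^2
    return empirical_risk + lam * rkhs_norm_sq

# Synthetic sample (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(50)

# Search over a compact set of kernel parameters; existence of the minimizing
# kernel over a compact kernel space is exactly Chapter 3's main result.
sigmas = np.linspace(0.05, 2.0, 40)
best = min(sigmas, key=lambda s: regularized_value(x, y, s, lam=1e-3))
print(f"selected Gaussian width: {best:.3f}")
```

Restricting the width to a compact interval is what makes the existence result applicable; over the full, non-compact Gaussian family, Chapter 3 instead gives separate sufficient conditions for existence and for non-existence of the optimal solution.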
In Chapter 4, the regularized least squares algorithm with two kernels is studied; the chapter treats its consistency and error estimates. For multi-kernel learning algorithms, error bounds and learning rates have not been studied much; Q. Wu, Y. Ying, and D.-X. Zhou studied the error analysis and consistency of multi-kernel classifiers. In Section 1, we set up the basic framework: the hypothesis space is generated by two Mercer kernels, the error is divided into an approximation error and a sample error, and the main theorem is stated. In Section 2, we prove a representer theorem for the algorithm together with a norm estimate for its solution. In Section 3, we bound the approximation error by means of integral operators. In Sections 4 and 5, we estimate the two parts of the sample error by the one-sided Bernstein inequality and by covering numbers, respectively, thereby proving the theorem stated in Section 1. Finally, we compare the resulting learning rate with that of the single-kernel algorithm; in most cases the multi-kernel algorithm is better. Although we discuss only two kernels, the method applies equally to any finite number of kernels. A small numerical sketch of the two-kernel scheme is given below.
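To make the two-kernel hypothesis space concrete, the following sketch implements a regularized least squares scheme generated by two Mercer kernels. The reduction to a single linear system is a standard representer-theorem computation for the objective written in the comments; the particular kernels (two Gaussian kernels of different widths), the parameter values, and the data are our illustrative assumptions and need not match the thesis's setting.

```python
# Minimal sketch (assumptions ours, not the thesis's exact scheme): regularized
# least squares in a hypothesis space generated by two Mercer kernels,
#   min over f1 in H_{K1}, f2 in H_{K2} of
#   (1/m) * sum_i (f1(x_i) + f2(x_i) - y_i)^2 + lam1*||f1||^2 + lam2*||f2||^2.
# By a representer theorem, f_j = sum_i coef_j[i] * K_j(x_i, .), and the
# first-order conditions collapse to one linear system:
#   (K1/lam1 + K2/lam2 + m*I) g = y,   coef_1 = g/lam1,   coef_2 = g/lam2.
import numpy as np

def gaussian(a, b, sigma):
    """Cross Gram matrix exp(-|a_i - b_j|^2 / (2 sigma^2))."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma**2))

def fit_two_kernel_rls(x, y, lam1, lam2, sig1, sig2):
    """Solve the two-kernel regularized least-squares problem on the sample."""
    m = len(x)
    K1, K2 = gaussian(x, x, sig1), gaussian(x, x, sig2)
    g = np.linalg.solve(K1 / lam1 + K2 / lam2 + m * np.eye(m), y)
    return g / lam1, g / lam2            # coefficients of f1 and f2

def predict(t, x, coef1, coef2, sig1, sig2):
    """Evaluate f1 + f2 at new points t."""
    return gaussian(t, x, sig1) @ coef1 + gaussian(t, x, sig2) @ coef2

# Illustrative data: a wide and a narrow Gaussian kernel share the fit.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=60)
y = np.sign(x) * x**2 + 0.05 * rng.standard_normal(60)
c1, c2 = fit_two_kernel_rls(x, y, lam1=1e-3, lam2=1e-3, sig1=1.0, sig2=0.1)
t = np.linspace(-1.0, 1.0, 5)
print(predict(t, x, c1, c2, sig1=1.0, sig2=0.1))
```

When lam1 = lam2 = lam, the scheme coincides with ordinary regularized least squares for the sum kernel K1 + K2; unequal parameters let the two reproducing kernel Hilbert spaces trade off against each other, which is the point of using two kernels.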
In Chapter 5, we summarize the work and point out directions for future research.

Keywords/Search Tags: Multi-kernel regularization, strictly positive definite functions, optimal solution, covering number, learning rate