
Asymptotic Convergence Of Regularization Learning Algorithms Based On Weakly Dependent Samples

Posted on: 2012-10-05    Degree: Master    Type: Thesis
Country: China    Candidate: Q Guo    Full Text: PDF
GTID: 2120330335479782    Subject: Applied Mathematics
Abstract/Summary:
Since its birth, learning theory has rapidly developed into a subject of both theory and application, and studies in both directions have made great achievements. It seeks to provide mathematical foundations for algorithms that learn from examples. Its core problems are the consistency and the learning rates of learning algorithms: consistency is a qualitative analysis, while the learning rate is a quantitative one.

In this thesis we mainly study regularized least square learning algorithms associated with reproducing kernel Hilbert spaces. These algorithms are based on convex risk minimization with Tikhonov or Ivanov regularization. We focus on the asymptotic convergence and the learning rates of regularized learning algorithms based on weakly dependent samples; integral operator and sample operator techniques are used throughout. The thesis is structured as follows.

In Chapter 1, we introduce the research background and basic knowledge of statistical learning theory, including the origin of regularized learning algorithms, the current research status of kernel-based regularized learning algorithms, the main contents of this thesis, and basic definitions and lemmas.

In Chapter 2, we study the asymptotic convergence of coefficient regularization based on weakly dependent samples. We mainly establish the asymptotic convergence of coefficient regularization with the l^2 norm under two different sampling settings. In Section 1, we consider the asymptotic convergence of least square coefficient regularization based on weakly dependent samples. When the strong-mixing coefficients satisfy a polynomial decay, error bounds and learning rates are derived by means of integral operator and sample operator techniques. Moreover, we compare these results with the learning rates of the regularized least square regression algorithm. In Section 2, we study coefficient regularization in a more general sampling setting. A non-i.i.d. setting is considered, in which the sequence of probability measures for sampling is not identical but the sequence of marginal distributions converges exponentially fast in the dual of a Holder space, and the samples z_i, i ≥ 1, are weakly dependent, satisfying a strong mixing condition. Satisfactory capacity-independent error bounds and learning rates are derived by integral operator techniques.

In Chapter 3, we study capacity-dependent estimates by integral operator techniques. We study the learning performance of regularized least square regression under two capacity conditions associated with the integral operator L_K. An i.i.d. setting is considered, where the sequence of probability measures for sampling is identical and the samples are drawn independently. We improve the learning rate to O(m^{-β/(1+2β)}) by integral operator techniques.

In Chapter 4, we give an improved asymptotic analysis of least square regression learning algorithms. We study the performance of least square regression and of coefficient regularization with the l^2 norm, and we utilize integral operator techniques together with a Bennett inequality for Hilbert-space-valued random variables to improve the error bounds and learning rates.
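For orientation, the two schemes discussed above can be written in standard form as follows. This is a generic sketch in common notation (sample z = {(x_i, y_i)}_{i=1}^m, Mercer kernel K, reproducing kernel Hilbert space H_K, regularization parameter λ > 0), not a restatement of the thesis' exact formulation.

Regularized (Tikhonov) least square regression in H_K:

    f_{z,\lambda} = \arg\min_{f \in \mathcal{H}_K} \left\{ \frac{1}{m} \sum_{i=1}^{m} \bigl( f(x_i) - y_i \bigr)^2 + \lambda \| f \|_K^2 \right\}

Coefficient regularization with the l^2 norm (one common normalization of the penalty term):

    f_{z,\lambda} = \sum_{j=1}^{m} \alpha_j^{z} K(\cdot, x_j), \qquad
    \alpha^{z} = \arg\min_{\alpha \in \mathbb{R}^m} \left\{ \frac{1}{m} \sum_{i=1}^{m} \Bigl( \sum_{j=1}^{m} \alpha_j K(x_i, x_j) - y_i \Bigr)^2 + \lambda m \sum_{j=1}^{m} \alpha_j^2 \right\}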
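The dependence assumption referred to above as "weakly dependent" or "strong mixing" is usually formalized through the α-mixing coefficient of the sample sequence {z_i}. The definition below is the standard one and is given only as background; the polynomial-decay condition is stated with generic constants a, t > 0 rather than the thesis' specific values.

    \alpha(k) = \sup_{j \ge 1} \; \sup \bigl\{ \, |P(A \cap B) - P(A)P(B)| \; : \; A \in \sigma(z_1, \dots, z_j), \; B \in \sigma(z_{j+k}, z_{j+k+1}, \dots) \, \bigr\}

Polynomial decay of the mixing coefficients then means α(k) ≤ a k^{-t} for all k ≥ 1; the i.i.d. case corresponds to α(k) = 0 for every k.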
Keywords/Search Tags: least square regression, integral operator, strong mixing sequence, sample error, approximation error, learning rate