
Convergence Analysis Of Coefficient Regularized Classification Online Algorithms

Posted on: 2012-01-22
Degree: Master
Type: Thesis
Country: China
Candidate: M D Tian
Full Text: PDF
GTID: 2178330338493988
Subject: Basic mathematics
Abstract/Summary:
Learning theory, which relates to fields such as neural network learning, regression, classification, density estimation, and pattern recognition, is a method of searching for the best approximation to the unknown probability distribution of samples in a feature space. With the rapid development of this theory, it has found good applications in data modeling, optimization, classification, and prediction, and has therefore become a field attracting particular attention.

Binary classification is one of the central applications of machine learning methods in general and of SVMs in particular. The goal of classification is to construct, on the basis of independent and identically distributed samples, a classifier that predicts under the unknown distribution with small misclassification error. This thesis deals with estimating the learning rates of the regularized binary classification algorithm. It recasts the general classification algorithm as a classification learning algorithm on a finite-dimensional Euclidean space, so that the rate estimate reduces to a convergence-rate estimate for the solutions of a statistical programming problem. The rates are decomposed into the sample error and the approximation error: the sample error is bounded via the Markov inequality, and the approximation error is given explicitly.

In the first part, we present the gradient descent method on Euclidean space, which constructs an effective classifier when a large sample size is given. We assume the differentiable loss function is locally Lipschitz at the origin, and we impose restrictions on the step size that ensure the uniform boundedness of the learning sequence. Excess misclassification error learning rates are then derived explicitly for several commonly used loss functions. Finally, we consider the case where the regularization parameter changes with each step, and we analyze the convergence of fully online algorithms associated with strongly convex loss functions.
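The thesis itself does not include code; as a minimal sketch of the kind of online scheme described above, the following assumes a logistic loss, a squared-norm coefficient-regularization term with strength `lam`, and a decaying step size `eta0 / sqrt(t)` of the sort used to keep the learning sequence bounded. All function names and parameter values here are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def online_classifier(samples, lam=0.1, eta0=0.5):
    """Online gradient descent for regularized binary classification.

    samples: iterable of (x, y) pairs, x a feature vector, y in {-1, +1}.
    lam:     coefficient-regularization strength (assumed value).
    eta0:    base step size; eta_t = eta0 / sqrt(t) decays over iterations.
    """
    w = None
    for t, (x, y) in enumerate(samples, start=1):
        if w is None:
            w = np.zeros_like(x, dtype=float)
        eta = eta0 / np.sqrt(t)  # decaying step size
        margin = y * np.dot(w, x)
        # gradient of the logistic loss log(1 + exp(-margin)) with respect to w
        grad_loss = -y * x / (1.0 + np.exp(margin))
        # one gradient step, including the regularization gradient lam * w
        w -= eta * (grad_loss + lam * w)
    return w

def predict(w, x):
    """Sign classifier induced by the learned coefficient vector w."""
    return 1 if np.dot(w, x) >= 0 else -1
```

Each sample is processed once, so a single pass over the data yields the classifier; the decaying step size plays the role of the step-size restriction discussed in the abstract.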
Keywords/Search Tags:Classification algorithm, online learning, coefficient regularization, error analysis