
On The Error Analysis Of Coefficient Regularized Scheme

Posted on: 2012-05-05
Degree: Master
Type: Thesis
Country: China
Candidate: L Z Feng
Full Text: PDF
GTID: 2178330335978426
Subject: Applied Mathematics
Abstract/Summary:
Learning theory seeks a function that approximately predicts unknown future data from observed data (samples). There are two typical tasks: classification learning and regression learning. Batch learning and online learning are two basic schemes constructed for sample analysis and wider application. By the representer theorem, both schemes can be reduced to a coefficient regularization scheme, i.e., an optimization problem on a finite-dimensional Euclidean space.

This thesis concerns the convergence analysis of classification learning and regression learning associated with coefficient regularized algorithms, and it consists of two parts. The first part studies online classification learning. Using nonsmooth analysis and convex analysis, we prove strong convergence of the learning sequence under a general choice of step sizes, and derive explicit learning rates for a particular step size. The second part investigates a coefficient regularized algorithm with the least-squares loss for the regression problem. Through an error decomposition, the error bound is controlled by the regularization parameter; choosing this parameter appropriately balances the bias-variance trade-off and yields the optimal scheme.
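The least-squares coefficient regularized scheme described in the second part can be sketched as follows. This is a minimal illustration, not the thesis's algorithm: the Gaussian kernel, the squared l2 coefficient penalty, and all names (`gaussian_kernel`, `coefficient_regularized_ls`, `lam`, `sigma`) are assumptions made here for concreteness.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||X[i] - Z[j]||^2 / (2 * sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def coefficient_regularized_ls(X, y, lam, sigma=1.0):
    """Solve min_alpha (1/m) * ||K @ alpha - y||^2 + lam * ||alpha||^2.

    Setting the gradient to zero gives the finite-dimensional linear system
        (K.T @ K / m + lam * I) @ alpha = K.T @ y / m,
    an optimization problem on R^m rather than on the full RKHS.
    """
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    A = K.T @ K / m + lam * np.eye(m)
    alpha = np.linalg.solve(A, K.T @ y / m)
    return alpha, K
```

A larger `lam` shrinks the coefficients (lower variance, higher bias), while a smaller `lam` fits the sample more closely; tuning it is the bias-variance balance the abstract refers to.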
Keywords/Search Tags:coefficient regularization, classification learning, regression learning, gradient descent methods, bias-variance, reproducing kernel Hilbert space