
Consistency Of Regularized Boosting Algorithms

Posted on: 2017-12-15
Degree: Master
Type: Thesis
Country: China
Candidate: X L Cai
Full Text: PDF
GTID: 2348330485979282
Subject: Computational Mathematics
Abstract/Summary:
Over the past few decades, a common way to improve the accuracy of a given learning algorithm in machine learning has been to combine several rough, moderately inaccurate "weak" rules of thumb into a single highly accurate "strong" prediction rule. This approach is the Boosting algorithm. Because it is a general framework rather than a single method, Boosting has been applied in many areas since it was first proposed, such as handwriting recognition, face detection, text classification, and DNA sequence analysis.

Starting from the basic idea of Boosting, this thesis first introduces the underlying principles of the algorithm and the state of research at home and abroad, and briefly describes some classical variants of Boosting. Most traditional learning algorithms are based on the ERM (empirical risk minimization) principle, but ERM alone is not sufficient: it can lead to overfitting, that is, fitting a limited sample with an overly complex function, which results in poor generalization. To avoid this problem, we study the Boosting algorithm within a regularization framework. We then prove that the regularized Boosting algorithm is consistent for independent and identically distributed (i.i.d.) samples. Because the i.i.d. assumption is very strong, both in theory and in practice, we also consider the non-i.i.d. case, namely samples drawn from a mixing sequence, and prove that the regularized Boosting algorithm based on mixing sequences is consistent as well.
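To make the regularized framework concrete, here is a minimal sketch of the kind of objective such consistency analyses typically study; the notation (surrogate loss phi, penalty lambda_n, base-learner class H) is assumed for illustration and may differ from the thesis's exact formulation:

\hat{f}_n = \arg\min_{f \in \mathrm{span}(\mathcal{H})} \; \frac{1}{n}\sum_{i=1}^{n} \phi\big(y_i f(x_i)\big) + \lambda_n \lVert f \rVert_{\mathcal{H}},

where \lVert f \rVert_{\mathcal{H}} is, for example, the \ell_1 norm of the ensemble coefficients. Consistency then means that the risk of the estimator converges to the Bayes risk, R(\hat{f}_n) \to R^* = \inf_f R(f) in probability as n \to \infty, provided \lambda_n \to 0 at a suitable rate.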
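For readers unfamiliar with the weak-to-strong combination the abstract describes, the following is a small illustrative sketch of the classic AdaBoost procedure with decision stumps, written in plain NumPy. It is not the regularized algorithm analyzed in the thesis, and all function names here are our own:

import numpy as np

def train_stump(X, y, w):
    # Find the best decision stump (feature, threshold, polarity)
    # under the current sample weights w; labels y are in {-1, +1}.
    n, d = X.shape
    best = (0, 0.0, 1, np.inf)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] <= thr, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, T=50):
    # Fit T weighted stumps; return a list of (alpha, stump) pairs.
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(T):
        j, thr, pol, err = train_stump(X, y, w)
        err = max(err, 1e-10)               # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)      # upweight misclassified points
        w /= w.sum()
        ensemble.append((alpha, (j, thr, pol)))
    return ensemble

def predict(ensemble, X):
    # Strong rule: sign of the weighted vote of the weak stumps.
    F = np.zeros(len(X))
    for alpha, (j, thr, pol) in ensemble:
        F += alpha * pol * np.where(X[:, j] <= thr, 1, -1)
    return np.sign(F)

# Usage sketch: ensemble = adaboost(X, y, T=20); yhat = predict(ensemble, X)

The regularized variants the thesis studies typically constrain or penalize the size of the coefficients alpha (or stop early), which is what permits the consistency proofs under both i.i.d. and mixing-sequence sampling.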
Keywords/Search Tags: Regularization, Boosting, Consistency, Mixing