Research On Error Bound Theory And The Statistical Feature Of Machine Learning Algorithms

Posted on: 2017-04-13
Degree: Master
Type: Thesis
Country: China
Candidate: J J Zhou
Full Text: PDF
GTID: 2348330563450521
Subject: Control Science and Engineering
Abstract/Summary:
How to analyse the generalization performance of a proposed algorithm is a central problem in machine learning research. The generalization error measures how well a learned model performs on unknown data, so we seek an upper bound on the generalization error that can guide model selection and lead to a learning machine with better generalization ability. With this motivation, this thesis studies upper bounds on the generalization error in two settings: domain-adaptive regression learning and online learning. Domain-adaptive learning addresses the case where the training samples and the test samples follow inconsistent probability distributions, while online learning, an important branch of machine learning, is a real-time, interactive and dynamic learning method.

Firstly, the basic learning theory of generalization error bounds is presented, and domain adaptation learning and online learning are reviewed: for each setting we introduce the underlying learning theory and the currently popular research methods, and we summarize the open problems on generalization error bounds that need further discussion in the two scenarios.

Secondly, by carrying the ideas of domain-adaptive classification over to the regression setting, we derive an upper bound on the loss for domain-adaptive regression. The new bound expresses the target error in terms of the error on the source domain and the discrepancy distance between the source domain and the target domain (a schematic form of such a bound is sketched after this abstract).

Thirdly, a novel online learning method applicable to regression is developed. Adaptive control theory supplies the idea and the Lyapunov stability theorem supplies the theoretical foundation: we set up an objective function similar to that of gradient descent and, using the Lyapunov stability theorem, derive a new weight-update rule together with explicit bounds on the instantaneous error and the cumulative error (an illustrative sketch of this style of update also follows the abstract). Experiments on artificially generated data sets and on UCI data sets confirm the effectiveness of the online learning algorithm.

Furthermore, when the true model is time-varying, we consider whether the learning algorithm still learns such a model well and how the learning rate should be set to improve effectiveness; that is, we study online learning in the migration experts scenario. A concrete learning-rate adjustment method is given for this scenario, and the algorithm is verified by experiments.

Finally, the main work is summarized, and some issues in domain-adaptive regression learning and online learning that need further study are pointed out.
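The domain-adaptive regression result described above follows the general pattern of discrepancy-based bounds: the target-domain risk is controlled by the source-domain risk plus a distance between the two distributions. The display below is only a schematic form under that assumption; the symbols for the risks, the discrepancy distance and the residual term are illustrative placeholders, and the exact constants and residual proved in the thesis may differ.

```latex
% Schematic (illustrative) form of a discrepancy-based domain-adaptation bound;
% not the thesis's exact statement.
\[
  \mathcal{L}_{P_T}(h)
  \;\le\;
  \mathcal{L}_{P_S}(h)
  \;+\;
  \operatorname{disc}_{L}\!\bigl(P_S, P_T\bigr)
  \;+\;
  \lambda ,
\]
% where \(\mathcal{L}_{P_S}\) and \(\mathcal{L}_{P_T}\) are the expected losses of the
% hypothesis h under the source and target distributions, \(\operatorname{disc}_L\) is the
% discrepancy distance with respect to the loss L, and \(\lambda\) is a residual term that
% is small when a single hypothesis fits both domains well.
```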
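The online-learning contribution derives a weight-update rule whose error bounds rest on a Lyapunov stability argument rather than on step-size tuning alone. The abstract does not give the exact rule, so the following is a minimal sketch of the idea using a normalized (projection-type) update familiar from adaptive control, for which the Lyapunov function V(w) = ||w - w*||^2 is non-increasing in the noise-free linear case; the class name, the parameter eta and the synthetic-data example are illustrative and not taken from the thesis.

```python
import numpy as np


class LyapunovStyleOnlineRegressor:
    """Online linear regression with a normalized (projection-type) update.

    Minimal sketch in the spirit of the abstract: the step size is normalized
    so that V(w) = ||w - w*||^2 does not increase for 0 < eta < 2 when the data
    come from a fixed noise-free linear model.  This is NOT the exact update
    rule derived in the thesis.
    """

    def __init__(self, dim, eta=1.0, eps=1e-8):
        assert 0.0 < eta < 2.0, "eta in (0, 2) is required for the stability argument"
        self.w = np.zeros(dim)   # current weight estimate
        self.eta = eta           # normalized step size
        self.eps = eps           # regularizer avoiding division by zero

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, y):
        """Observe (x, y), suffer the prediction error, and correct the weights."""
        err = y - self.predict(x)                            # instantaneous error e_t
        self.w += self.eta * err * x / (self.eps + x @ x)    # normalized correction
        return err


# Tiny usage example on synthetic data drawn from a fixed linear model.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    learner = LyapunovStyleOnlineRegressor(dim=5, eta=1.0)
    cum_sq_err = 0.0
    for t in range(1000):
        x = rng.normal(size=5)
        y = float(w_true @ x)
        cum_sq_err += learner.update(x, y) ** 2
    print("cumulative squared error:", cum_sq_err)
```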
Keywords/Search Tags: Generalization Error Bound, Domain Adaptation Learning, Online Learning, Complexity Metrics, Adaptive Control, Expert Learning