
Statistical Learning Of Error Entropy

Posted on: 2022-09-27
Degree: Master
Type: Thesis
Country: China
Candidate: Z H Wu
Full Text: PDF
GTID: 2480306479487064
Subject: Probability theory and mathematical statistics
Abstract/Summary:
In the era of information and data, much attention is paid to data processing, especially data analysis and prediction. Data-based machine learning seeks the underlying law of the samples by observing the data, thereby estimating the population law so as to handle unknown data and make predictions. However, because real-world sample data are limited, statistical learning theory specializes in the study of machine learning in the small-sample setting. As the theory has continued to develop and mature, statistical learning theory has received widespread attention and research, and its rich results have been applied in ever more areas, promoting the development of science and technology.

This thesis studies the statistical learning theory of the minimum error entropy (MEE) criterion. Data simulations show that least squares deviates badly when the data contain outliers. By using weak moment conditions, empirical risk minimization, and a relaxed Bernstein condition, a comparison theorem is proved, which clarifies the consistency and the convergence rate of the MEE algorithm. Specifically, the main work of this thesis is as follows:

(1) The accuracy of the MEE estimator and the least-squares estimator is compared under non-Gaussian noise. Combined with a nonparametric regression model, a comparison theorem is established under a weak moment condition of order (1+ε), establishing the relationship between the generalization error and the prediction error.

(2) A concentration inequality is proved under the relaxed Bernstein condition, and the behavior of the variance under the weak moment condition is studied; the variance directly affects the size of the error bound.

(3) The learning rate of the MEE algorithm under weak moment conditions is derived, especially in the case ε < 1, where the noise may even have infinite variance. Furthermore, the convergence rate of the MEE algorithm is proved for heavy-tailed distributions or data with outliers.
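As a rough illustration of the simulation described above (least squares being skewed by outliers while the MEE criterion is not), the sketch below fits a one-parameter regression line two ways on synthetic data. The data, the kernel width `sigma`, and the grid-search range are all illustrative choices, not the thesis's actual experiment. The MEE fit maximizes a Parzen-window estimate of the information potential V(e), which is equivalent to minimizing Rényi's quadratic error entropy H2(e) = -log V(e).

```python
import math
import random

random.seed(0)

# Synthetic data: y = 2*x + small Gaussian noise, plus a few gross outliers.
n = 60
xs = [i / n for i in range(n)]
ys = [2.0 * x + random.gauss(0.0, 0.05) for x in xs]
for i in (5, 25, 45):          # inject outliers
    ys[i] += 5.0

def ols_slope(xs, ys):
    """Least-squares slope through the origin: argmin_a sum_i (y_i - a*x_i)^2."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / sxx

def info_potential(errors, sigma=0.1):
    """Parzen estimate of the information potential
    V(e) = (1/n^2) * sum_{i,j} G_sigma(e_i - e_j)  (Gaussian kernel).
    Maximizing V is equivalent to minimizing H2(e) = -log V(e)."""
    m = len(errors)
    c = 1.0 / (math.sqrt(2 * math.pi) * sigma)
    total = 0.0
    for ei in errors:
        for ej in errors:
            total += c * math.exp(-((ei - ej) ** 2) / (2 * sigma ** 2))
    return total / (m * m)

def mee_slope(xs, ys):
    """MEE estimate of the slope by grid search over candidate slopes."""
    best_a, best_v = None, -1.0
    for k in range(200):                      # slopes in [1.0, 3.0)
        a = 1.0 + 0.01 * k
        errors = [y - a * x for x, y in zip(xs, ys)]
        v = info_potential(errors)
        if v > best_v:
            best_a, best_v = a, v
    return best_a

a_ols = ols_slope(xs, ys)
a_mee = mee_slope(xs, ys)
print(f"OLS slope: {a_ols:.3f}   MEE slope: {a_mee:.3f}   (true slope: 2.0)")
```

The three outliers drag the least-squares slope noticeably above 2, while the MEE fit stays near the true slope: residuals at the outliers sit far outside the kernel width, so they contribute almost nothing to the information potential. Because error entropy is shift-invariant, a full MEE regression would also need to recover the intercept separately (e.g. from the mean of the residuals); the intercept-free model keeps this sketch minimal.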
Keywords/Search Tags: statistical learning, error entropy, empirical risk minimization, moment condition, concentration inequality