
Error Analysis Of Classification Learning Algorithm Based On LUMs Loss

Posted on: 2024-09-18
Degree: Master
Type: Thesis
Country: China
Candidate: X Q He
Full Text: PDF
GTID: 2568306938950759
Subject: Mathematics
Abstract/Summary:
In the Internet age, the analysis of big data is increasingly important. Because data are cluttered and voluminous, classification has become ever more difficult, and research on classification problems has therefore received widespread attention. The basic idea of classification learning is to use a classifier to assign feature labels to data samples: an optimal classification model is obtained by training on labeled samples and is then used to predict the labels of unlabeled samples. With the continued emergence of high-dimension, low-sample-size data, both support vector machines (SVM) and linear discriminant analysis suffer from the data-piling problem on such data. Distance-weighted discrimination (DWD) avoids data piling in the high-dimension, low-sample-size setting, but its classification performance on other data is not as good as that of SVM and linear discriminant analysis. By adjusting its parameters, the LUM (Large-margin Unified Machine) loss can be transformed into the hinge loss or into the DWD loss, so the LUM family combines the advantages of SVM and DWD: it can switch among different classification algorithms for different classification problems and can resolve the data-piling problem of high-dimension, low-sample-size data.

In the reproducing kernel Hilbert space H_K, this thesis studies the error analysis of the classification problem based on the regularized LUM loss. The error analysis relies on a comparison theorem that bounds the excess misclassification error by the excess generalization error. Based on this comparison theorem, and using a leave-one-out analysis of the misclassification error of LUMs in binary classification together with integral operators and sample operators, satisfactory error bounds and learning rates are derived.

The thesis is organized as follows. Chapter one introduces the history and development of statistical learning theory, the background and significance of the topic, and the current research status of LUMs. Chapter two systematically surveys the current research status of classification problems and analyzes some classification algorithms together with their advantages and disadvantages. Chapter three introduces the convex loss functions of Large-margin Unified Machines (LUMs): by adjusting the parameters, the LUM loss can be transformed into a variety of convex loss functions. In particular, it analyzes the optimization algorithm equivalent to DWD when the parameter in the LUM loss equals 1, since the DWD algorithm effectively avoids the data-piling problem of high-dimension, low-sample-size data; this is why the regularized classification algorithm based on the LUM loss has attracted attention. Chapter four studies the error analysis of the LUM-based regularized learning algorithm by the leave-one-out method and obtains error bounds and learning rates for the binary classification problem with the LUM loss. Chapter five gives a summary and prospects for further research.
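The parameter family behind the hinge/DWD transformation can be made concrete. In the published formulation of Liu, Zhang and Wu (2011), the LUM loss with parameters a > 0 and c >= 0 is V(u) = 1 - u for u < c/(1+c), and V(u) = (1/(1+c)) * (a/((1+c)u - c + a))^a otherwise; letting c tend to infinity recovers the SVM hinge loss, while a = c = 1 recovers the DWD loss. The following Python sketch illustrates this family (the function names and parameter defaults are illustrative choices based on that published formulation, not taken from the thesis itself):

```python
def lum_loss(u, a=1.0, c=1.0):
    """LUM loss with parameters a > 0, c >= 0
    (form published by Liu, Zhang & Wu, 2011; assumed here)."""
    if u < c / (1.0 + c):
        return 1.0 - u
    return (1.0 / (1.0 + c)) * (a / ((1.0 + c) * u - c + a)) ** a

def dwd_loss(u):
    """Classical DWD loss: 1 - u for u <= 1/2, else 1/(4u)."""
    return 1.0 - u if u <= 0.5 else 1.0 / (4.0 * u)

def hinge_loss(u):
    """SVM hinge loss (1 - u)_+, the c -> infinity limit of the LUM family."""
    return max(0.0, 1.0 - u)

# a = c = 1 reproduces DWD exactly; a very large c approximates the hinge loss.
for u in (-1.0, 0.0, 0.25, 0.5, 1.0, 2.0):
    assert abs(lum_loss(u, a=1.0, c=1.0) - dwd_loss(u)) < 1e-12
    assert abs(lum_loss(u, a=1.0, c=1e8) - hinge_loss(u)) < 1e-6
```

The assertions check the two reductions mentioned in the abstract: the DWD case at a = c = 1, and convergence to the hinge loss as c grows.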
Keywords/Search Tags: classification problem, coefficient regularization, learning rate, statistical learning theory