
Error Analysis Of Kernel-based Regularization Learning Algorithms With Parameterized Loss Function

Posted on: 2021-04-04  Degree: Doctor  Type: Dissertation
Country: China  Candidate: S H Wang  Full Text: PDF
GTID: 1368330623458720  Subject: Statistics
Abstract/Summary:
With the development of information technology and artificial intelligence, research on machine learning has attracted increasing attention. Quantitative error analysis of learning algorithms is one of the central topics of statistical learning theory. In recent years, several studies have shown that adjustable loss functions with tunable parameters can improve the learning performance of kernel-based regularization algorithms. In this dissertation, we investigate several kernel-based regularization learning algorithms with parameterized loss functions, derive explicit error bounds for them, and analyze how the parameters affect the learning performance. The main results are as follows.

1. In practical applications, the performance deterioration caused by outliers cannot be ignored; even a single singular sample may strongly affect the result of a learning algorithm. To alleviate this deterioration, we investigate a robust kernel-based regularization learning algorithm with a parameterized loss function. Since the loss function with homotopy parameters is quasiconvex rather than convex, its performance cannot be analyzed by the usual convex-analysis approach. We develop an analysis method based on quasiconvex analysis theory, provide an explicit error bound, and quantify the extent to which outliers affect the performance.

2. We consider a kernel-regularized classification algorithm with a parameterized robust loss function, proposed to alleviate the classification performance deterioration caused by outliers. A comparison relationship between the excess misclassification error and the excess generalization error is established; combining it with convex analysis theory, we derive an error bound. The results show that the performance of the classifier is affected by outliers, and that the extent of the impact can be controlled by choosing the parameters properly.

3. Kernel-based regularized ranking algorithms have recently attracted much attention in statistical learning theory, and pairwise learning is a generalization of the ranking problem. We propose a kernel-based regularized pairwise learning algorithm with a parameterized quasiconvex loss function. Using quasiconvex analysis theory and an inequality for U-statistics, we obtain an explicit error bound and show that the sample error is influenced by the parameters of the loss function. Experimental results show that the proposed algorithm is more robust than the ranking algorithm with the least-squares loss function.

4. Online learning is well suited to the big-data era and has received considerable attention for its computational efficiency on large-scale datasets. We propose a kernel-based regularization online learning algorithm with a parameterized quadratic loss function. Under mild conditions on the step sizes, we prove convergence in probability of the learning sequence and give an explicit analysis of the generalization error at the last step. It is shown that the error bound can be controlled by the parameter of the loss function.

5. Applying online learning to pairwise learning problems, we propose a kernel-based regularized online pairwise learning algorithm with a parameterized quadratic loss function. Using convex analysis and Rademacher-complexity techniques together with the properties of pairwise reproducing kernels, we analyze the convergence of the learning sequence and provide explicit error bounds for the last step. It is shown that the learning rates can be greatly improved by adjusting the scale parameters of the loss function.
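To make the role of a loss-function scale parameter concrete, the following sketch uses a bounded Welsch-type loss. This is an illustrative stand-in, not the dissertation's actual parameterized loss: its parameter sigma caps the contribution any single outlier can make to the empirical risk, which is the mechanism behind the robustness discussed in items 1 and 2.

```python
import numpy as np

def welsch_loss(residual, sigma=1.0):
    """Bounded robust loss (illustrative stand-in): behaves like the
    squared loss for small residuals but saturates at sigma**2, so a
    single outlier's contribution to the empirical risk is capped."""
    r = np.asarray(residual, dtype=float)
    return sigma ** 2 * (1.0 - np.exp(-(r ** 2) / sigma ** 2))

print(welsch_loss(0.1))    # close to 0.1**2: behaves like the squared loss
print(welsch_loss(100.0))  # saturates near sigma**2 = 1.0
```

Increasing sigma makes the loss closer to the plain squared loss; decreasing it suppresses outliers more aggressively, which mirrors how the parameters control the "extent of impact" of outliers in the error bounds.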
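The pairwise empirical risk in item 3 is a U-statistic: an average of a loss over all distinct pairs of samples. A minimal sketch, with the predictor and loss left as plug-in callables (the function names here are illustrative, not from the dissertation):

```python
from itertools import combinations

def pairwise_empirical_risk(f, xs, ys, loss):
    """U-statistic empirical risk: average the pairwise loss over all
    distinct index pairs (i, j) with i < j."""
    pairs = list(combinations(range(len(ys)), 2))
    return sum(loss((f(xs[i]) - f(xs[j])) - (ys[i] - ys[j]))
               for i, j in pairs) / len(pairs)

# a predictor that preserves all pairwise differences incurs zero risk
# under any loss with loss(0) == 0
xs, ys = [0.0, 1.0, 2.0], [0.0, 2.0, 4.0]
print(pairwise_empirical_risk(lambda x: 2 * x, xs, ys, lambda r: r ** 2))  # 0.0
```

Because each sample appears in many pairs, the summands are dependent; this is why the error analysis in item 3 needs an inequality for U-statistics rather than standard i.i.d. concentration bounds.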
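Items 4 and 5 study stochastic-gradient iterations in a reproducing kernel Hilbert space. A hedged sketch of one such update rule for the regularized squared loss follows; the Gaussian kernel, the 1/(t+1) step-size schedule, and all names are assumptions chosen for illustration, not the dissertation's exact algorithm.

```python
import numpy as np

def gaussian_kernel(x, z, width=1.0):
    return float(np.exp(-((x - z) ** 2) / (2.0 * width ** 2)))

def online_kernel_regression(stream, lam=0.1, step=lambda t: 1.0 / (t + 1)):
    """One pass of regularized online learning in an RKHS with the
    (1/2)(f(x) - y)**2 loss:
        f_{t+1} = f_t - eta_t * ((f_t(x_t) - y_t) * K(x_t, .) + lam * f_t)
    The hypothesis is kept as a kernel expansion over the points seen so far."""
    xs, coefs = [], []
    for t, (x, y) in enumerate(stream):
        fx = sum(c * gaussian_kernel(xi, x) for c, xi in zip(coefs, xs))
        eta = step(t)
        coefs = [c * (1.0 - eta * lam) for c in coefs]  # shrink term: -eta*lam*f_t
        xs.append(x)
        coefs.append(-eta * (fx - y))                   # gradient term at x_t
    return xs, coefs

def predict(xs, coefs, x):
    return sum(c * gaussian_kernel(xi, x) for c, xi in zip(coefs, xs))

# feeding the same sample (x, y) = (0, 1) repeatedly: the iterate approaches
# the regularized minimizer f(0) = y / (1 + lam)
xs, coefs = online_kernel_regression([(0.0, 1.0)] * 50)
print(round(predict(xs, coefs, 0.0), 3))
```

The "mild conditions on the step sizes" in item 4 correspond to schedules like the decaying eta_t above: the steps must shrink fast enough for the iterates to stabilize, yet slowly enough that the algorithm can still reach the regularized target.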
Keywords/Search Tags: Regularization, Error analysis, Convex analysis, Online learning, Pairwise learning