
Research And Application Of Classification Method Of Robust Extreme Learning Machine

Posted on: 2022-09-27 | Degree: Master | Type: Thesis
Country: China | Candidate: L R Ren | Full Text: PDF
GTID: 2518306323485894 | Subject: Computer application technology
Abstract/Summary:
Extreme Learning Machine (ELM), as an efficient feedforward neural network method, has been extensively developed in the field of machine learning. Compared with traditional single hidden layer feedforward neural networks (SLFNs), ELM offers faster training and stronger generalization ability. During training, the weights connecting the input layer and the hidden layer are randomly initialized; the only parameters that need to be learned form the output weight matrix between the hidden layer and the output layer, which can be obtained by solving a ridge regression problem. Therefore, ELM has been widely used in various supervised and unsupervised learning tasks in recent years. However, the traditional ELM method does not consider the robustness of the algorithm: when the data contain noise or outliers, the performance of ELM is easily disturbed. In addition, the traditional ELM method cannot fully learn the representation information of the original data, and its ability to explore the high-order geometric structure information of the data is limited. In view of these problems, this paper studies and improves the traditional ELM method and applies the improved methods to classification problems on real data. Compared with several existing methods, the improved methods achieve better classification performance.
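As background for the contributions listed below, the following is a minimal sketch of standard ELM training as described above: random input-to-hidden weights, a nonlinear hidden activation, and output weights obtained in closed form from ridge regression. This is an illustrative reconstruction, not the author's code; the variable names, the tanh activation, and the regularization parameter C are assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden=200, C=1.0, seed=None):
    """Standard ELM: random hidden layer + ridge-regression output weights.

    X : (n_samples, n_features) input matrix
    T : (n_samples, n_classes) one-hot target matrix
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input weights and biases are drawn randomly and never updated.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix
    # Output weights beta solve: min ||H beta - T||^2 + (1/C) ||beta||^2.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.argmax(H @ beta, axis=1)
```

Because only beta has to be solved for, training reduces to a single linear solve, which is why ELM trains much faster than back-propagation-based SLFNs.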
The specific research content is mainly divided into the following four parts (a sketch of the iterative reweighting scheme often used to implement such robust losses follows part (4)):

(1) Based on the L2,1-norm, a robust Extreme Learning Machine (L2,1-ELM) is proposed. Because the loss function of the traditional Extreme Learning Machine is the square loss, it amplifies the negative influence of noise and outliers in the datasets. This paper therefore introduces the L2,1-norm as the loss function of the Extreme Learning Machine, avoiding the square loss when computing the total residual and thereby weakening the sensitivity of the method to noise and outliers.

(2) Based on the correntropy-induced loss, a sparse robust graph regularized Extreme Learning Machine (CSRGELM) is proposed. Compared with the L2,1-norm, the correntropy-induced loss can handle not only Gaussian noise but also non-Gaussian noise in the datasets; this method replaces the square loss of the original Extreme Learning Machine with the correntropy-induced loss to further improve robustness. In addition, the traditional ELM uses the L2-norm to constrain the output weight matrix without considering its structured sparsity, so this paper uses the L2,1-norm to constrain the output weight matrix, which realizes feature selection over the hidden layer nodes and simplifies the network model. At the same time, graph-Laplacian regularization is introduced to improve the ability of the ELM to learn the manifold structure information of the data, which helps to improve classification performance.

(3) Based on the kernel risk-sensitive loss (KRSL), a hyper-graph regularized robust Extreme Learning Machine (KRSL-HRELM) is proposed. This method uses the kernel risk-sensitive loss as the loss function of ELM; compared with the correntropy-induced loss, the kernel risk-sensitive loss converges faster and is more robust. In addition, compared with graph regularization, hyper-graph regularization can model the relationship among multiple sample points, which enhances the ability of the ELM to learn high-order structural information. To explore the data-mining capability of the new method, KRSL-HRELM is further extended to semi-supervised learning, and a semi-supervised hyper-graph regularized robust Extreme Learning Machine (SS-KRSL-HRELM) is proposed. The two methods are applied to supervised and semi-supervised learning tasks, respectively, and the experimental results show that their classification ability is significantly better than that of the compared methods.

(4) Based on the kernel risk-sensitive mean p-power error, a robust Extreme Learning Machine (KRPELM) is proposed. By introducing the mean p-power error (MPE) into the kernel risk-sensitive loss, the original kernel risk-sensitive loss is extended to a more general form, the kernel risk-sensitive mean p-power error. Introducing this loss not only improves the robustness of the ELM but also enhances its ability to learn higher-order representation information in the datasets. Furthermore, the robustness experiments show that KRPELM is more robust than several existing methods.

Analysis of the final experimental results shows that the four methods proposed in this paper effectively enhance the robustness of the Extreme Learning Machine and, compared with several existing methods, achieve better classification performance and generalization ability.
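All four contributions replace the square loss with a robust loss (L2,1-norm, correntropy-induced loss, KRSL, or the kernel risk-sensitive mean p-power error). The sketch below illustrates the reweighting pattern commonly used to optimize such losses in ELM-style models: each sample is weighted by a function of its current residual, large residuals from likely outliers are down-weighted, and a weighted ridge regression is re-solved until convergence. This is a hypothetical illustration rather than the thesis' exact algorithms; the weight formulas, the kernel width sigma, and the omission of the graph/hyper-graph regularization terms are simplifying assumptions.

```python
import numpy as np

def robust_elm_weights(E, loss="l21", sigma=1.0, eps=1e-8):
    """Per-sample weights for iteratively reweighted ELM training.

    E : (n_samples, n_classes) residual matrix H @ beta - T.
    Samples with large residual norms receive small weights, which is
    how losses such as the L2,1-norm or the correntropy-induced loss
    suppress noisy samples; KRSL and its mean p-power extension fit the
    same scheme with their own weight formulas.
    """
    r = np.linalg.norm(E, axis=1)                # per-sample residual norm
    if loss == "l21":
        return 1.0 / (2.0 * r + eps)             # IRLS weights for sum_i ||e_i||_2
    if loss == "correntropy":
        return np.exp(-r**2 / (2.0 * sigma**2))  # Gaussian-kernel weights
    raise ValueError(f"unknown loss: {loss}")

def robust_elm_fit(H, T, C=1.0, loss="l21", n_iter=20, sigma=1.0):
    """Re-solve a sample-weighted ridge regression for beta until convergence."""
    n_hidden = H.shape[1]
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    for _ in range(n_iter):
        w = robust_elm_weights(H @ beta - T, loss=loss, sigma=sigma)
        Hw = H * w[:, None]                      # apply per-sample weights
        beta = np.linalg.solve(H.T @ Hw + np.eye(n_hidden) / C, Hw.T @ T)
    return beta
```

In the full CSRGELM and KRSL-HRELM objectives, a graph- or hyper-graph-Laplacian regularization term would, roughly speaking, add an extra H^T L H-style term to the same linear system so that nearby samples receive similar outputs.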
Keywords/Search Tags:Extreme Learning Machine, Robustness analysis, Supervised learning, Manifold learning, Machine learning