
A Study For The Enhancement And Extension Of Extreme Learning Machine

Posted on: 2016-12-13
Degree: Master
Type: Thesis
Country: China
Candidate: D Liu
Full Text: PDF
GTID: 2348330536487050
Subject: Computer Science and Technology
Abstract/Summary:
Among the various kinds of neural networks, single-hidden-layer feed-forward networks (SLFNs) have been extensively investigated in both theoretical and applied fields due to their simplicity and approximation capability. However, traditional SLFN learning algorithms, e.g. gradient-based algorithms, face challenging problems such as slow convergence and local minima. Departing from conventional neural network theory, a new learning scheme was proposed, referred to as the extreme learning machine (ELM). Compared with traditional techniques, ELM achieves higher learning speed and better generalization performance and overcomes some of these challenges. ELM has gained increasing attention recently, and different variants have been developed for different problems, such as online learning, learning to rank, and semi-supervised and unsupervised learning. ELM has also been applied directly in a wide range of real applications, such as biomedical analysis, computer vision, and system modeling and prediction.

The study in this thesis is divided into two parts. In the first part, we study improvements to incremental ELM and propose a new incremental ELM, which we call the length-changeable incremental extreme learning machine (LCI-ELM). LCI-ELM allows more than one hidden node to be added to the network at a time, and the existing network is treated as a whole when the output weights are tuned. The output weights of newly added hidden nodes are determined by a partial error-minimizing method. We prove that an SLFN constructed by LCI-ELM has universal approximation capability on a compact input set as well as on a finite training set. We then compare LCI-ELM with I-ELM, CI-ELM, and EI-ELM on several datasets, and the experimental results demonstrate that LCI-ELM clearly improves the convergence rate.

In the second part, we use ELM to cope with concept drift problems and propose a forgetting-parameter extreme learning machine (FP-ELM). In the training phase, FP-ELM learns data one-by-one or chunk-by-chunk and assigns a forgetting parameter to the previous training data according to the current performance, so as to adapt to possible changes after a new chunk arrives. Performance comparisons between FP-ELM and two frequently used ensemble approaches are carried out on several regression and classification problems with concept drift. The experimental results show that FP-ELM achieves comparable or better performance with lower training time.
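To make the basic ELM scheme underlying both parts concrete, the following is a minimal sketch of ELM training: the input weights and biases of the hidden layer are drawn at random and never tuned, and only the output weights are computed in closed form via the Moore-Penrose pseudoinverse of the hidden-layer output matrix. The tanh activation, node count, and toy regression task here are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng):
    # Random, fixed hidden-layer parameters (never tuned)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T      # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + beta * 0 + b) @ beta if False else np.tanh(X @ W + b) @ beta

# Toy regression task: fit sin(3x) on [-1, 1]
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
T = np.sin(3 * X).ravel()

W, b, beta = elm_train(X, T, n_hidden=50, rng=rng)
pred = elm_predict(X, W, b, beta)
mse = np.mean((pred - T) ** 2)
```

Because the hidden layer is fixed, training reduces to a single linear least-squares solve, which is the source of ELM's speed advantage over gradient-based SLFN training.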
Keywords/Search Tags:Extreme learning machine, Universal approximation, Concept drift, Regularized optimization methods
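The forgetting-parameter idea behind FP-ELM can be sketched as exponentially weighted least squares over the hidden-layer features: each incoming chunk down-weights the accumulated statistics of older data so the model can track concept drift. Note that FP-ELM adapts the forgetting parameter from current performance; this sketch uses a fixed factor, and the activation, node count, ridge term, and simulated sign-flip drift are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 30

# Fixed random hidden layer (assumed tanh activation)
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
hidden = lambda X: np.tanh(X @ W + b)

# Exponentially forgetting least squares over hidden features:
#   A <- lam*A + H^T H + ridge*I,   c <- lam*c + H^T t,   beta = A^{-1} c
lam = 0.5        # fixed forgetting factor (FP-ELM adapts this from performance)
ridge = 1e-3     # small regularizer for numerical stability
A = ridge * np.eye(n_hidden)
c = np.zeros(n_hidden)

# Stream of 20 chunks with an abrupt concept drift at chunk 10
for k in range(20):
    X = rng.uniform(-1, 1, (50, 1))
    t = np.sin(3 * X).ravel() if k < 10 else -np.sin(3 * X).ravel()
    H = hidden(X)
    A = lam * A + H.T @ H + ridge * np.eye(n_hidden)
    c = lam * c + H.T @ t

beta = np.linalg.solve(A, c)       # current output weights

# After the drift, predictions should track the new concept (-sin)
Xtest = np.linspace(-1, 1, 100).reshape(-1, 1)
err = np.mean((hidden(Xtest) @ beta + np.sin(3 * Xtest).ravel()) ** 2)
```

With a forgetting factor of 0.5, data from ten chunks ago carries a relative weight of about 0.001, so the pre-drift concept contributes almost nothing to the final output weights.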