In this paper, we study a machine learning method called the extreme learning machine (ELM). As a learning algorithm for single-hidden-layer feedforward neural networks (SLFNs), ELM offers fast learning speed and good generalization ability. The hidden-layer nodes play an important role in the ELM algorithm, and there are two approaches to determining the number of hidden-layer nodes: pruning methods and incremental learning methods. We introduce two pruning methods, the optimally pruned extreme learning machine (OP-ELM) and the Tikhonov-regularized OP-ELM (TROP-ELM). Incremental learning methods initialize a small network and then add new nodes to it. When new hidden-layer nodes are added to an existing network, retraining the whole network is time-consuming; the error-minimized ELM (EM-ELM) instead updates the output weights incrementally. However, due to issues such as overfitting, EM-ELM does not always achieve good generalization. In this paper, based on the regularization method, we propose an improved version of EM-ELM called the incremental regularized extreme learning machine (IR-ELM). As new hidden-layer nodes are added to the network one by one, IR-ELM updates the output weights very quickly and achieves better generalization than EM-ELM. We also propose an enhanced version of IR-ELM (EIR-ELM), which selects the best node from a set of candidate hidden-layer nodes to add to the network, further improving the generalization performance of ELM and yielding a more compact network. For both classification and regression problems, we run experiments with IR-ELM and EIR-ELM on benchmark datasets, comparing them with the original ELM, OP-ELM/TROP-ELM, and EM-ELM/EEM-ELM; the results demonstrate the effectiveness of IR-ELM and EIR-ELM.
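To make the setting concrete, the following is a minimal sketch of a regularized ELM for regression, not the paper's IR-ELM update itself: hidden-layer input weights and biases are assigned randomly and never trained, and the output weights are obtained in closed form from a ridge-regularized least-squares problem. All names, the toy target function, and the regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative): learn y = sin(x) on [-3, 3].
X = rng.uniform(-3, 3, size=(200, 1))
T = np.sin(X)

# Hidden layer: random input weights and biases, fixed after initialization.
L = 40                              # number of hidden nodes (assumed)
W = rng.normal(size=(1, L))         # input weights
b = rng.normal(size=(1, L))        # biases
H = np.tanh(X @ W + b)              # hidden-layer output matrix

# Regularized output weights: beta = (H^T H + lam * I)^{-1} H^T T.
lam = 1e-3                          # Tikhonov regularization constant (assumed)
beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ T)

# Training error of the resulting network.
pred = H @ beta
rmse = float(np.sqrt(np.mean((pred - T) ** 2)))
```

Only the output weights `beta` involve any optimization, which is why ELM training is fast; the incremental variants discussed in the paper avoid recomputing this solve from scratch when nodes are added.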