Deep learning has reached unprecedented heights in both theoretical research and practical application, and many real-world problems can now be solved with it. However, most deep learning networks have complex structures and often cannot solve a problem effectively within the expected time. The Extreme Learning Machine (ELM) offers fast learning, strong scalability and generalization performance, and is not prone to falling into local optima, which has attracted scholars from all over the world to apply it in their respective fields. However, the single-hidden-layer structure of the ELM greatly limits its feature-mapping ability, so it is often unable to handle high-dimensional problems and large datasets. Motivated by these two observations, scholars have in recent years proposed many models that combine the complementary advantages of the ELM and deep learning, such as the Two-hidden-layer Extreme Learning Machine (TELM) and the Deep Extreme Learning Machine (DELM). These networks extract high-level, abstract feature representations of the original data through a multi-layer structure while adjusting the network parameters based on the idea of the ELM, so that fast learning is retained alongside the ability to extract high-level abstract features of the input data.

This paper focuses on the shortcomings of the shallow structure of the ELM and improves both the number of network layers and the method of computing the network weights. The specific work is as follows.

1. This paper improves the network structure of the TELM proposed by previous researchers, reducing the number of Moore-Penrose generalized inverse computations while removing the constraint that the two hidden layers must have equal numbers of nodes. The resulting Improved Two-hidden-layer Extreme Learning Machine (ITELM) shows a certain degree of improvement in learning speed, test accuracy, and network flexibility.

2. A new multi-layer network structure, the Hybrid Hierarchical Extreme Learning Machine (HH-ELM), is proposed based on the hierarchical idea of the Hierarchical Extreme Learning Machine (H-ELM). The feature-extraction part of the HH-ELM uses the ELM-based Auto-Encoder (ELM-AE), with Singular Value Decomposition (SVD) introduced to express the features better, while the classification part adopts the ITELM described above. In addition, a sparse feature representation of the original data is obtained through the sparse activation of the ReLU activation function.

3. This paper presents a parameter-optimized model of the H-ELM, called the Regularized Hybrid Hierarchical Extreme Learning Machine (RHH-ELM). In the feature-extraction part of the HH-ELM, the RHH-ELM not only optimizes the weights with the L2 norm but also adds an L1-norm constraint. This yields a network structure that effectively reduces over-fitting and achieves network sparsity.

To verify the effectiveness of the proposed algorithms, fourteen different datasets were selected. To compare the two kinds of double-hidden-layer networks, this paper selects more than ten datasets of different categories and dimensions for contrast experiments. To compare the feature-extraction and classification ability of the HH-ELM and the H-ELM, three representative image datasets are selected. To show that the RHH-ELM can make up for the defects of the HH-ELM on small datasets, some small datasets are chosen from the above for further experiments. The experimental results demonstrate the soundness of the algorithms proposed in this paper.
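For readers unfamiliar with the ELM training idea that all of the above models build on, a minimal sketch follows: the hidden-layer weights are drawn at random and fixed, and only the output weights are solved in closed form via the Moore-Penrose generalized inverse of the hidden-layer output matrix. This is an illustrative toy version with assumed function and variable names, not the thesis's implementation.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Train a basic single-hidden-layer ELM.

    X: (n_samples, n_features) inputs
    T: (n_samples, n_outputs) targets (e.g. one-hot labels)

    The input weights W and biases b are random and never updated;
    the output weights beta are obtained in one step as H^+ T,
    where H^+ is the Moore-Penrose pseudoinverse of the hidden
    activation matrix H.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, fixed input weights
    b = rng.standard_normal(n_hidden)                # random, fixed biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of the trained ELM."""
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit the XOR mapping with an over-complete hidden layer.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
W, b, beta = elm_train(X, T, n_hidden=50)
pred = elm_predict(X, W, b, beta)
```

Deep variants such as the ELM-AE reuse exactly this one-step solve per layer, with the input data itself serving as the target matrix T, which is why the multi-layer models above retain the ELM's fast learning.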