
Research on ELM for Image Classification and Robust Regression

Posted on: 2015-04-10
Degree: Master
Type: Thesis
Country: China
Candidate: K Zhang
GTID: 2298330431489066
Subject: Applied Mathematics

Abstract
Extreme Learning Machine (ELM), a fast machine learning algorithm, was originally proposed for single-hidden-layer feedforward neural networks (SLFNs). It has attracted significant attention due to its high learning efficiency, conceptual simplicity, and good generalization capability. With the development of ELM, theoretical and empirical studies have shown that it can be extended to generalized single-hidden-layer feedforward networks, including support vector machines, polynomial networks, RBF networks, and conventional (both single-hidden-layer and multi-hidden-layer) feedforward neural networks. In contrast to the tenet in neural networks that all hidden nodes in SLFNs need to be adjusted, ELM learning theory shows that the hidden nodes of generalized feedforward networks do not need to be tuned and can be randomly generated: all hidden-node parameters are independent of the target functions and the training data. Because the hidden nodes in ELM are random and the output weights are therefore determined analytically, ELM can achieve extremely fast learning speed. In particular, it is widely recognized that ELM can be applied directly to regression and multi-class classification with satisfying results. ELM is thus a promising basis for ambitious learning tasks; nevertheless, it still has some drawbacks. First, in image classification applications, ELM handles noisy images poorly. Second, in regression applications, training on data sets containing outliers tends to yield unreliable ELM models. Third, the ELM network is prone to losing sparsity because of the inherently unreliable character of its random hidden nodes. In view of these three drawbacks, the main research contents of this thesis are outlined as follows:

(1) Through the fusion of ELM and the sparse representation based classifier (SRC), a hybrid classifier is designed for fast and accurate image classification. The proposed classifier combines the feature insensitivity of SRC with the rapid classification ability of ELM. The key point is that a misclassified-image estimation criterion and an adaptive dictionary dimensionality reduction method are proposed, so that, on the one hand, misclassified images can be reclassified by the robust SRC classifier and, on the other hand, the computation time of SRC can be further reduced. Experimental results demonstrate that the proposed classifier not only outperforms ELM in classification accuracy but also has much lower computational complexity than SRC.

(2) In order to reduce the negative effect of outliers, two outlier-robust ELM algorithms with an ℓ1-norm loss function are proposed on the basis of regularized ELM and weighted regularized ELM. Specifically, the fast and accurate augmented Lagrangian multiplier method is applied to guarantee effectiveness and efficiency. According to the experiments, the two algorithms not only maintain the advantages of the original ELM but also show notable and stable accuracy in handling data with outliers.

(3) As for the inherent loss of sparsity in the ELM network, a sparse ELM network construction method based on ℓ2-norm regularization is proposed. Experimental results show that the method can construct a more compact network while achieving good generalization performance.
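For reference, the following is a minimal sketch of the basic ELM training procedure summarized above (random, untuned hidden-node parameters followed by analytically determined output weights), assuming a sigmoid activation, a ridge-style regularization term as in regularized ELM, and NumPy; the function and variable names are illustrative and not taken from the thesis.

    import numpy as np

    def elm_train(X, T, n_hidden=100, reg=1e-3, seed=None):
        """Basic ELM: random hidden nodes, analytically solved output weights.
        X : (n_samples, n_features) inputs
        T : (n_samples, n_outputs) targets (one-hot rows for classification)
        reg : ridge regularization strength (regularized ELM)
        """
        rng = np.random.default_rng(seed)
        # Hidden-node parameters are drawn at random and never adjusted.
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden-layer output matrix
        # Output weights solved in closed form by regularized least squares.
        beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta   # take argmax over columns for classification

With the regularization term removed, the output weights reduce to the Moore-Penrose pseudoinverse solution of the original ELM; the closed-form solve, rather than iterative tuning of hidden nodes, is what gives ELM its fast training speed.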
Keywords/Search Tags:Extreme Learning Machine, Sparse Representation, Image Classification, Robust Regression