
Experimental Study Of Speed Up Techniques For Sparse Autoencoder

Posted on: 2015-02-15
Degree: Master
Type: Thesis
Country: China
Candidate: Y X Luo
Full Text: PDF
GTID: 2268330431951108
Subject: Signal and Information Processing
Abstract/Summary:
The success of machine learning algorithms generally depends on data representation. So far there has been a great deal of literature on unsupervised feature learning and the joint training of deep networks. There is little specific guidance, however, on combining hand-designed features, or operations on them, with features learned through unsupervised learning. To help fill this gap, this study investigated whether feature learning can be sped up by incorporating hand-designed features, or operations on them, into a feature learning algorithm, specifically the sparse autoencoder. In this paper, using the MNIST ("Modified National Institute of Standards and Technology") handwritten digit database as an example, we propose a novel method for training sparse autoencoders. In this method, we first learn a small set of features through training, then generate more features through operations such as rotation and translation, and finally use the whole dataset to fine-tune the network. This approach avoids optimizing the cost function over all hidden nodes, as the traditional sparse autoencoder training process requires, which is very time-consuming. Simulation results show that the proposed method speeds up training by over 50% while keeping recognition accuracy at the same level or better. The present findings also contribute to the field's understanding of sparse representation, namely that large-scale sparse features can be generated from small-scale sparse features.
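To make the pipeline concrete, the sketch below illustrates the three steps on toy data: train a small sparse autoencoder, expand its learned filters by rotation and translation, and stack the enlarged filter bank for use in initialising the network before fine-tuning. This is a minimal illustration under stated assumptions, not the thesis code: the 8x8 patch size, the 25-filter starting bank, the choice of transforms, and all function names are hypothetical and chosen for brevity (Python with numpy and scipy.ndimage).

    # Minimal sketch (not the thesis code): small sparse autoencoder,
    # then filter-bank expansion by rotation/translation.
    import numpy as np
    from scipy.ndimage import rotate, shift

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_small_autoencoder(X, n_hidden=25, epochs=100, lr=0.1,
                                rho=0.05, beta=3.0):
        """Batch gradient descent on a tied-weight sparse autoencoder.
        X: (n_samples, n_visible) patches scaled to [0, 1]. Sparsity is
        enforced with the usual KL penalty pushing mean hidden
        activations towards the target rho."""
        n_visible = X.shape[1]
        rng = np.random.default_rng(0)
        W = rng.normal(0, 0.01, (n_visible, n_hidden))
        b1 = np.zeros(n_hidden)
        b2 = np.zeros(n_visible)
        for _ in range(epochs):
            H = sigmoid(X @ W + b1)            # encode
            Xr = sigmoid(H @ W.T + b2)         # decode (tied weights)
            rho_hat = H.mean(axis=0)           # mean activation per unit
            d_out = (Xr - X) * Xr * (1 - Xr)   # output delta
            sparse_grad = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
            d_hid = (d_out @ W + sparse_grad) * H * (1 - H)
            W -= lr * (X.T @ d_hid + d_out.T @ H) / len(X)
            b1 -= lr * d_hid.mean(axis=0)
            b2 -= lr * d_out.mean(axis=0)
        return W, b1

    def expand_features(W, patch_side=8):
        """Generate a larger filter bank from the small learned one by
        rotating and translating each filter, instead of training every
        hidden unit from scratch."""
        expanded = []
        for w in W.T:                          # each column is one filter
            f = w.reshape(patch_side, patch_side)
            for angle in (0, 90, 180, 270):
                expanded.append(rotate(f, angle, reshape=False).ravel())
            for dx, dy in ((1, 0), (0, 1)):
                expanded.append(shift(f, (dy, dx)).ravel())
        return np.stack(expanded, axis=1)      # (n_visible, n_expanded)

    # Usage: learn 25 filters on 8x8 patches, expand to 150 (4 rotations
    # + 2 shifts each), then fine-tune the enlarged network as usual.
    X = np.random.default_rng(1).random((500, 64))  # stand-in for MNIST patches
    W_small, _ = train_small_autoencoder(X)
    W_big = expand_features(W_small)
    print(W_big.shape)                              # (64, 150)

Because only the small bank is trained by gradient descent and the rest of the filters are produced by cheap image transforms, the expensive optimization over all hidden nodes is avoided, which is the source of the reported speed-up.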
Keywords/Search Tags: representation learning, feature learning, sparse autoencoder, neural network