
Research On Autoencoder And Its Application

Posted on: 2018-06-13
Degree: Master
Type: Thesis
Country: China
Candidate: L H Meng
Full Text: PDF
GTID: 2348330539475141
Subject: Computer application technology
Abstract/Summary:
A successful representation of data is directly related to how that data can be interpreted and stored. Data representation is therefore crucial for machine learning tasks and for the realization of artificial intelligence. Autoencoders are neural network models proposed to learn good representations, and stacked autoencoders built from single-layer autoencoders have even greater representational capacity. An autoencoder can learn the structure of data adaptively and represent it efficiently; these properties make it well suited to data of large volume and variety, while avoiding expensive manual feature design and poor generalization. Moreover, using autoencoders for feature extraction in deep learning can yield better classification accuracy. However, autoencoders suffer from poor robustness and overfitting. To extract useful features while improving robustness and alleviating overfitting, this thesis studies the denoising sparse autoencoder, which adds a corrupting operation and a sparsity constraint to the traditional autoencoder. The effects of various sparsity constraints and corruption levels on recognition accuracy, learned filters and reconstruction results are explored on a handwritten digit dataset and a natural image dataset. The experimental results suggest that the denoising sparse autoencoder possesses better generalization and robustness than the traditional autoencoder, the sparse autoencoder and the denoising autoencoder.

The learning capacity of a shallow autoencoder is limited, and it cannot extract hierarchical features. To represent high-dimensional data efficiently and to study how to build deep structures from autoencoders, this thesis proposes a stacked denoising sparse autoencoder based on the denoising sparse autoencoder. First, the denoising sparse autoencoder is constructed by introducing a corrupting operation and a sparsity constraint into the traditional autoencoder. Then, a stacked denoising sparse autoencoder with multiple hidden layers is built by stacking denoising sparse autoencoders layer by layer. Experiments were designed to explore the influence of the corrupting operation and the sparsity constraint on different datasets, using networks with various depths and numbers of hidden units. The comparative experiments show that the test accuracy of the stacked denoising sparse autoencoder is considerably higher than that of other stacked models, regardless of the dataset or the number of layers. In addition, the deeper the network is, the fewer neurons are activated in each layer. More importantly, the thesis shows that strengthening the sparsity constraint is, to some extent, equivalent to increasing the corruption level. A sketch of the construction and the layer-wise stacking procedure is given after the abstract.

Based on the experimental results on shallow and deep networks and the limitations of current autoencoder models, this thesis analyzes the disadvantages of existing models and proposes two directions for improvement: the constructive autoencoder and the topographic autoencoder. Several important questions associated with autoencoders are also discussed, such as what characteristics an excellent autoencoder should have, which problems have not yet been answered satisfactorily, and whether there is a connection between autoencoders, brain science and cognitive science. Finally, the thesis summarizes the work reported here and outlines directions for future research on autoencoders.
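The following is a minimal sketch of the denoising sparse autoencoder and the greedy layer-wise stacking described above, written with PyTorch (an assumption; the thesis does not specify a framework). Masking noise for the corrupting operation, a KL-divergence penalty for the sparsity constraint, and all class, function and parameter names here are illustrative choices, not necessarily those used in the thesis.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingSparseAutoencoder(nn.Module):
    # One denoising sparse autoencoder: corrupt the input, encode, decode,
    # and reconstruct the CLEAN input under a sparsity penalty.
    def __init__(self, n_visible, n_hidden,
                 corruption_level=0.3, sparsity_target=0.05, sparsity_weight=1.0):
        super().__init__()
        self.encoder = nn.Linear(n_visible, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_visible)
        self.corruption_level = corruption_level   # fraction of inputs zeroed out
        self.sparsity_target = sparsity_target     # desired mean hidden activation (rho)
        self.sparsity_weight = sparsity_weight     # weight of the KL sparsity penalty

    def corrupt(self, x):
        # Masking noise: randomly zero a fraction of the input components.
        mask = (torch.rand_like(x) > self.corruption_level).float()
        return x * mask

    def forward(self, x):
        h = torch.sigmoid(self.encoder(self.corrupt(x)))
        x_hat = torch.sigmoid(self.decoder(h))
        return x_hat, h

    def loss(self, x):
        x_hat, h = self.forward(x)
        recon = F.mse_loss(x_hat, x)                    # reconstruct the uncorrupted input
        rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)   # mean activation of each hidden unit
        rho = self.sparsity_target
        kl = (rho * torch.log(rho / rho_hat)
              + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
        return recon + self.sparsity_weight * kl

def pretrain_stack(x, layer_sizes, epochs=10, lr=1e-3):
    # Greedy layer-wise stacking: each autoencoder is trained on the hidden
    # activations produced by the previously trained one.
    layers, inputs = [], x
    for n_hidden in layer_sizes:
        dsae = DenoisingSparseAutoencoder(inputs.shape[1], n_hidden)
        opt = torch.optim.Adam(dsae.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = dsae.loss(inputs)
            loss.backward()
            opt.step()
        with torch.no_grad():
            inputs = torch.sigmoid(dsae.encoder(inputs))
        layers.append(dsae)
    return layers

After pretraining, the trained encoders would typically be stacked with a classifier on top and fine-tuned with supervision; that step is omitted here for brevity.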
Keywords/Search Tags: Autoencoder, Stacked Autoencoders, Deep Learning, Machine Learning, Computational Cognitive Neuroscience