
Research On Deep Networks-oriented Auto-Encoders

Posted on: 2017-12-29
Degree: Master
Type: Thesis
Country: China
Candidate: Y P Lu
Full Text: PDF
GTID: 2348330488461982
Subject: Computer Science and Technology
Abstract/Summary:
Compared with shallow networks, deep networks can represent highly nonlinear and highly varying functions far more compactly. Deep learning endows deep networks with excellent generalization performance via pre-training and fine-tuning. As an unsupervised feature-learning model, the auto-encoder is often used for pre-training, providing a satisfactory initialization for deep networks. This thesis focuses on auto-encoders for deep networks, aiming to enhance the classification performance of deep networks by improving the auto-encoder itself and by making full use of the hierarchical features produced during pre-training. The contributions are as follows.

First, this thesis proposes a sparse auto-encoder based on the smoothed l1 norm. The receptive fields of the mammalian visual system suggest that inducing sparsity in auto-encoders can improve their performance. Conventionally, sparsity is imposed on auto-encoders via a KL-divergence penalty. Sparse representation theory, however, indicates that the l1 norm induces better sparsity. To address the non-differentiability of the l1 norm, we employ the infimal-convolution ("inf-conv") smoothing technique to obtain a smoothed l1 norm. Experimental results show that pre-training with the smoothed-l1 sparse auto-encoder, rather than the KL-divergence penalty, improves the classification performance of deep networks.

Second, this thesis presents feature ensemble learning based on the sparse auto-encoder. On the one hand, pre-training yields multiple feature representations at different levels of abstraction; on the other hand, an ensemble of multiple classifiers can improve on the accuracy and stability of any single classifier. Building on these two facts, we train a classifier on the features of each level and combine these classifiers to form the final prediction. Experimental results show that this ensemble strategy is effective.
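To make the two sparsity penalties concrete, the following is a minimal sketch (not the thesis's actual implementation) of both regularizers applied to a batch of hidden activations. It assumes the standard KL formulation with target sparsity rho, and assumes the inf-convolution of |x| with a quadratic, which yields the differentiable Huber-type smoothing of the l1 norm; the parameter names `rho` and `mu` are illustrative choices, not taken from the thesis.

```python
import numpy as np

def kl_sparsity(activations, rho=0.05, eps=1e-8):
    """Conventional KL-divergence sparsity penalty: sum over hidden
    units of KL(rho || rho_hat), where rho_hat is each unit's mean
    activation over the batch (assumed to lie in (0, 1))."""
    rho_hat = np.clip(activations.mean(axis=0), eps, 1 - eps)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def smoothed_l1(activations, mu=0.1):
    """Smoothed l1 penalty. Inf-convolving |x| with x^2/(2*mu) gives
    the Huber function: x^2/(2*mu) when |x| <= mu, else |x| - mu/2.
    Unlike the plain l1 norm, it is differentiable at zero."""
    a = np.abs(activations)
    quad = a ** 2 / (2 * mu)
    lin = a - mu / 2
    return np.sum(np.where(a <= mu, quad, lin))
```

For activations well above `mu`, the smoothed penalty tracks the l1 norm up to the constant `mu/2` per unit, so the gradient it feeds back to the encoder matches the l1 subgradient almost everywhere while remaining smooth near zero.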
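The ensemble step can likewise be sketched. The thesis does not specify the combination rule, so this assumes a simple majority vote over the class labels predicted by classifiers trained on the features of each pre-training layer; the function name and input layout are illustrative.

```python
import numpy as np

def majority_vote(per_layer_preds):
    """Combine predictions from classifiers trained on features of
    different pre-training layers. per_layer_preds has shape
    (n_layers, n_samples) with integer class labels; returns the
    majority label per sample (ties broken by the smallest label)."""
    preds = np.asarray(per_layer_preds)
    n_classes = preds.max() + 1
    # Count votes per class for each sample: shape (n_classes, n_samples).
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)
```

A weighted vote (e.g. weighting each layer's classifier by its validation accuracy) would be a natural variant, but the unweighted version already illustrates how disagreements among the per-layer classifiers are resolved.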
Keywords/Search Tags:Deep Networks, Auto-Encoder, Sparsity, l1 Norm, Ensemble Learning