
The Theory And Application Research Of Deep Autoencoder

Posted on: 2019-01-13
Degree: Master
Type: Thesis
Country: China
Candidate: F Liu
Full Text: PDF
GTID: 2428330548476165
Subject: Computer Science and Technology

Abstract/Summary:
The deep autoencoder learns features by reconstructing its original input. Greedy layer-wise pre-training followed by fine-tuning solves the training problem of deep networks and gives the model good generalization ability, so it has been widely applied in many fields, including image processing and natural language processing. As a typical unsupervised learning method, the deep autoencoder is often used as a pre-training model to find a good initial value for training multilayer perceptrons. In recent years, deep autoencoder algorithms have been studied extensively and have gradually become a research hotspot in pattern recognition.

As the visual basis for human perception of external things, images are an important resource through which humans obtain information from the outside world, so it is very important to perform image recognition and classification automatically with deep autoencoders. Feature extraction is the most important step in image classification. Since deep autoencoders can extract features hierarchically, this dissertation mainly studies their application to image classification. By improving the autoencoder algorithm, the hierarchical features obtained during pre-training effectively improve classification ability. The main work is summarized as follows:

(1) To address the local validity of the Gaussian kernel function in the smooth autoencoder, a hybrid-kernel smoothing autoencoder is proposed. The algorithm computes the weights of neighbors with a hybrid kernel, which increases the reliability of the weights and improves classification performance. Different forms of the hybrid kernel are also compared experimentally, and the results show that the hybrid kernel can effectively improve the classification performance of the algorithm.

(2) Because the unsupervised sparse autoencoder with nonnegativity constraints suffers from insufficient feature expressiveness, a supervised nonnegativity-constrained sparse autoencoder is proposed. The algorithm adds a classification prediction error to the sparse autoencoder with nonnegativity constraints: while keeping the connection weights nonnegative, it uses the discriminative information of the training samples to improve the classification ability of the model. In addition, based on the proposed model, the influence of the nonnegativity constraint on classification ability is studied, and an alternative nonnegativity constraint is proposed; the two constraints are compared in numerical experiments. Good experimental results verify the effectiveness of the proposed algorithm.

(3) Traditional autoencoders are trained by computing the error between the original input and its reconstruction, without considering the correlation among the reconstructed samples. A neighborhood-preserving autoencoder is therefore proposed: on the basis of the autoencoder, it seeks the neighbors of each reconstruction and takes a weighted average of them to improve the discriminative ability of the features. In addition, to understand the discriminability of the model more intuitively, the hidden-layer outputs are visualized. Experimental results show that the proposed algorithm can effectively improve the classification ability of the model.
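The following is a minimal sketch of the hybrid-kernel neighbor weighting idea in contribution (1): a convex combination of a Gaussian (local) kernel and a polynomial (global) kernel weights the k nearest neighbors of each sample. The mixing coefficient, kernel parameters, and function names are illustrative assumptions, not the dissertation's exact formulation.

```python
# Sketch: hybrid-kernel weighting of k nearest neighbors (assumed parameter values).
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def polynomial_kernel(x, y, c=1.0, d=2):
    return (np.dot(x, y) + c) ** d

def hybrid_kernel(x, y, lam=0.5, sigma=1.0, c=1.0, d=2):
    # Convex combination of a local (Gaussian) and a global (polynomial) kernel.
    return lam * gaussian_kernel(x, y, sigma) + (1 - lam) * polynomial_kernel(x, y, c, d)

def neighbor_weights(X, i, k=5, lam=0.5):
    """Normalized hybrid-kernel weights of the k nearest neighbors of X[i]."""
    dists = np.linalg.norm(X - X[i], axis=1)
    dists[i] = np.inf                       # exclude the sample itself
    idx = np.argsort(dists)[:k]             # indices of the k nearest neighbors
    w = np.array([hybrid_kernel(X[i], X[j], lam) for j in idx])
    return idx, w / w.sum()

# Example: weights of the 5 nearest neighbors of the first sample.
X = np.random.rand(100, 20)
idx, w = neighbor_weights(X, 0)
```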
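The next sketch illustrates the kind of objective described in contribution (2): a sparse autoencoder loss with a nonnegativity penalty on the weights plus a supervised classification term. The specific penalty forms (KL sparsity term, quadratic penalty on negative weights) and all coefficients are assumptions for illustration; the dissertation's exact objective may differ.

```python
# Sketch: supervised nonnegativity-constrained sparse autoencoder loss (assumed forms).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedNCSAE(nn.Module):
    def __init__(self, n_in=784, n_hid=256, n_cls=10):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hid)
        self.dec = nn.Linear(n_hid, n_in)
        self.clf = nn.Linear(n_hid, n_cls)      # supervised prediction head

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))           # hidden features
        x_rec = torch.sigmoid(self.dec(h))       # reconstruction
        logits = self.clf(h)                     # class prediction
        return h, x_rec, logits

def loss_fn(model, x, y, rho=0.05, beta=3.0, alpha=1e-3, gamma=1.0):
    h, x_rec, logits = model(x)
    rec = F.mse_loss(x_rec, x)                                   # reconstruction error
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()  # sparsity penalty
    neg = sum((w.clamp(max=0.0) ** 2).sum()                      # nonnegativity penalty:
              for w in (model.enc.weight, model.dec.weight))     # punish negative weights
    ce = F.cross_entropy(logits, y)                              # classification prediction error
    return rec + beta * kl + alpha * neg + gamma * ce
```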
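Finally, a sketch of the neighborhood-preserving idea in contribution (3): each reconstruction is replaced by a weighted average of its nearest neighbors among the batch reconstructions before the reconstruction error is computed. The Gaussian weighting, the value of k, and the training objective shown in the comment are assumptions, not the exact method.

```python
# Sketch: weighted average over neighbors of the reconstruction results (assumed weighting).
import numpy as np

def neighborhood_average(X_rec, k=5, sigma=1.0):
    """Weighted average of each reconstruction's k nearest reconstructed neighbors."""
    n = X_rec.shape[0]
    out = np.empty_like(X_rec)
    for i in range(n):
        d = np.linalg.norm(X_rec - X_rec[i], axis=1)
        d[i] = np.inf                                   # exclude the point itself
        idx = np.argsort(d)[:k]
        w = np.exp(-d[idx] ** 2 / (2 * sigma ** 2))     # Gaussian neighbor weights
        w = w / w.sum()
        out[i] = w @ X_rec[idx]                         # weighted neighborhood average
    return out

# Training would then penalize || X - neighborhood_average(decoder(encoder(X))) ||^2
# instead of the plain per-sample reconstruction error (a sketch, not the exact objective).
X_rec = np.random.rand(64, 784)
X_smooth = neighborhood_average(X_rec)
```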
Keywords/Search Tags:Deep learning, deep autoencoder, greedy layer-wise pre-training, fine-tuning, image classification