
A Study Of Auto-encoder Based Semi-supervised Representation And Classification Learning

Posted on: 2016-06-24
Degree: Master
Type: Thesis
Country: China
Candidate: H Y Wu
Full Text: PDF
GTID: 2308330479984818
Subject: Computer software and theory
Abstract/Summary:
Recent years have witnessed the significant success of representation learning and deep learning in various prediction and recognition applications. Currently, most studies in auto-encoder-based representation learning adopt a two-phase procedure: representation learning first, followed by supervised learning. In this process, the initial model weights, which inherit the good properties of the first-phase representation learning, are changed in the second phase to fit the training data. In other words, the second phase learns a better classification model at the cost of possibly degrading the effectiveness of the learned representation.

To fully exploit the advantages of both representation and supervised learning, we propose a joint learning framework that overcomes this drawback of the previous two-phase framework, in which supervised learning may damage the learned representation. The framework aims to learn a model that not only preserves the "semantics" of the original data captured by representation learning but also fits the training data well via supervised learning. Along this line, we develop the Semi-Supervised Auto-Encoder model in the spirit of the joint learning framework. We then propose a new back-propagation (BP) algorithm, based on gradient descent, for learning the model parameters automatically.

In the experiments, we use four public data sets from the UCI Machine Learning Repository: the Image Segmentation data (Image), the Johns Hopkins University Ionosphere database (Ionosphere), the Isolated Letter Speech Recognition data (Isolet), and the Letter Image Recognition Data (Lird). Two baseline methods are compared with our Semi-Supervised Auto-Encoder method (SSA for short): Logistic Regression (LR for short) and the Disjoint Learning Model (DLM for short).
We then compare classification performance while varying the number of hidden neurons, the number of training iterations, and the proportion of labeled training data in each data set. The experimental results validate the effectiveness of the proposed model.
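To make the joint learning idea concrete, the following is a minimal NumPy sketch of a semi-supervised auto-encoder trained with gradient descent. It is an illustration of the general technique only, not the thesis's actual implementation: the toy data, network sizes, variable names, and the trade-off weight `lam` are all our own assumptions. A single objective combines a reconstruction term over all samples (labeled and unlabeled) with a cross-entropy term over the labeled subset, so that supervised learning cannot silently destroy the representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-class points in 4-D; only the first 30 of 100 are labeled
# (a hypothetical stand-in for the UCI data sets used in the thesis)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                                  # one-hot labels
labeled = np.zeros(len(X), dtype=bool)
labeled[:30] = True

n_in, n_hid, n_cls = 4, 8, 2
W_e = rng.normal(scale=0.1, size=(n_in, n_hid))   # encoder weights
W_d = rng.normal(scale=0.1, size=(n_hid, n_in))   # decoder weights
W_c = rng.normal(scale=0.1, size=(n_hid, n_cls))  # classifier weights
lam, lr = 1.0, 0.1                                # loss trade-off, step size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(500):
    # Forward pass
    H = sigmoid(X @ W_e)                          # hidden representation
    X_hat = H @ W_d                               # reconstruction (all samples)
    logits = H @ W_c
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)             # softmax class probabilities

    # Joint objective: 0.5*||X - X_hat||^2 on all data
    #                  + lam * cross-entropy on the labeled subset
    rec = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))
    ce = -np.mean(np.sum(Y[labeled] * np.log(P[labeled] + 1e-12), axis=1))
    losses.append(rec + lam * ce)

    # Backward pass: both loss terms flow into the shared encoder
    d_rec = (X_hat - X) / len(X)                  # grad of mean reconstruction
    d_cls = np.zeros_like(P)
    d_cls[labeled] = (P[labeled] - Y[labeled]) / labeled.sum()
    dH = d_rec @ W_d.T + lam * d_cls @ W_c.T
    dZ = dH * H * (1 - H)                         # through the sigmoid

    # Gradient-descent updates on all three weight matrices jointly
    W_d -= lr * H.T @ d_rec
    W_c -= lr * lam * H.T @ d_cls
    W_e -= lr * X.T @ dZ

acc = np.mean(np.argmax(sigmoid(X @ W_e) @ W_c, axis=1) == y)
```

Because the encoder receives gradients from both terms at every step, the hidden representation is shaped by reconstruction and classification simultaneously, rather than being learned first and then overwritten, which is the key difference from the two-phase (disjoint) baseline.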
Keywords/Search Tags:Auto-Encoder, Representation Learning, Semi-supervised Learning, Classification Learning