
Unsupervised Transfer Learning Algorithms Based On Maximum Mean Discrepancy And Probabilistic Graph Embedding

Posted on: 2020-10-27  Degree: Master  Type: Thesis
Country: China  Candidate: P Xiao  Full Text: PDF
GTID: 2518305897970759  Subject: Computer application technology
Abstract/Summary:
Our world is made up of different domains. A data set collected in a particular way can be viewed as a domain, and data sets collected in different ways can be viewed as different domains. An important research problem is how to transfer knowledge between such domains. Traditional machine learning aims to minimize the regularized empirical risk on the training data and to find a model with minimal expected risk on the test data. It rests, however, on the assumption that the training and test data follow a similar joint probability distribution. Transfer learning instead trains a model for the target task using a semantically related but different source domain, and it has become an increasingly influential and active research field. This thesis reviews recent progress in transfer learning and then proposes two new transfer learning algorithms:

1) First, we propose a more robust framework called Transfer Latent Representations (TLR). The framework is built on a simple linear autoencoder, which is expected to preserve more of the properties shared by the two domains. Specifically, the encoder projects the data of both domains into a latent space, as in existing domain adaptation methods, while the decoder imposes an additional constraint: the original data must be reconstructable from the projection. We further integrate Maximum Mean Discrepancy (MMD) and Manifold Regularization (MR) into the framework to further narrow the distance between the two domains. Experiments on digit and object cross-domain recognition datasets show that TLR is more effective and robust than state-of-the-art unsupervised domain adaptation methods.

2) Second, we design a new method called Probabilistic Graph Embedding (PGE). PGE first derives the probabilities that each target-domain instance belongs to each category, which allows the target domain to be explored more fully. We then obtain a projection matrix by constructing a within-class probabilistic graph; this matrix embeds both domains into a shared subspace in which the domain shift is largely removed. Experiments on cross-domain object recognition datasets show that PGE is more effective and robust than state-of-the-art unsupervised domain adaptation methods.
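To make the role of the MMD term concrete, the following is a minimal illustrative sketch, not the thesis implementation: it combines a linear-kernel MMD between projected source and target features with the reconstruction error that a linear autoencoder would impose. All names, shapes, and the trade-off weight lam are hypothetical choices for this example.

```python
import numpy as np

def linear_mmd2(Zs, Zt):
    # Squared MMD with a linear kernel: squared distance between domain means.
    return float(np.sum((Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2))

def tlr_style_objective(Xs, Xt, W, lam=1.0):
    # Encode both domains with a shared linear projection W (d x k).
    Zs, Zt = Xs @ W, Xt @ W
    # Decoder-style constraint: the projection should allow reconstruction (here via W^T).
    recon = np.mean((Xs - Zs @ W.T) ** 2) + np.mean((Xt - Zt @ W.T) ** 2)
    # MMD penalty pulls the projected domain means closer together.
    return recon + lam * linear_mmd2(Zs, Zt)

# Example usage with random data standing in for source/target features.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 20))
Xt = rng.normal(loc=0.5, size=(80, 20))
W = np.linalg.qr(rng.normal(size=(20, 5)))[0]  # an orthonormal projection
print(tlr_style_objective(Xs, Xt, W))
```

In this sketch the objective is only evaluated; in practice the projection would be learned by minimizing such a criterion, with the MMD term shrinking the gap between the domain distributions while the reconstruction term keeps the latent space faithful to the original data.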
Keywords/Search Tags: Transfer Learning, Classification, Subspace, Maximum Mean Discrepancy