
Research On Transfer Subspace Learning Via Sparse Coding

Posted on: 2018-02-07
Degree: Master
Type: Thesis
Country: China
Candidate: L Zhu
Full Text: PDF
GTID: 2428330623450736
Subject: Computer Science and Technology
Abstract/Summary:
Traditional machine learning methods assume that the source and target input distributions are identical. However, in practical applications the source and target data may come from different distributions, so with traditional methods the source data must be re-collected whenever the target distribution changes. To avoid wasting data, transfer learning was developed to tolerate differences between the source and target distributions. This thesis focuses on the most challenging setting, unsupervised domain adaptation, in which no labels are available in the target domain. A general solution for unsupervised domain adaptation is to narrow the gap between the distributions across domains, in the hope of enhancing the generalization ability of the classifier. Sparse coding and subspace learning are popular techniques for dimensionality reduction and have become strong candidates for domain adaptation, but methods based on them still have shortcomings. The main work of this thesis is as follows:

1. A discriminative domain adaptation method (DDA) based on supervised sparse coding is proposed. Traditional sparse coding is unsupervised, whereas unsupervised domain adaptation allows the source domain to have labels. Most existing approaches explore the domain-invariant features shared across domains but neglect the valuable discriminative information in the source domain. To address this issue, the proposed DDA model reduces domain shift by seeking a common discriminative subspace jointly through supervised group sparse coding (SGSC) and a discriminative regularization term. In particular, DDA adapts SGSC to yield discriminative coefficients for the target data, and further unites them with the discriminative regularization term to induce a discriminative common subspace across domains. Experiments show that both strategies boost the transferability of source-domain knowledge to the target domain, reduce domain shift, and improve the effectiveness of learning tasks.

2. A joint graph embedding discriminative domain adaptation method (JGDDA) is proposed. Graph embedding, a common framework for dimensionality reduction, finds projection subspaces through graph-Laplacian regularization to preserve the intrinsic geometric properties of the data distributions. Building on the first model, the graph embedding technique is therefore added to obtain a more effective projection subspace. At the same time, this model incorporates an ℓ2,1 group-sparse constraint on the kernel subspace projection, which induces row sparsity in the projection matrix and can better reweight the original instances. This instance-reweighting regularizer lowers the weights of irrelevant instances and produces a more robust subspace. In addition, an equivalent solution to the graph-Laplacian regularization term is embedded in the objective function so that no eigenvalue decomposition is needed during iterations. The resulting optimization problem is solved with the Augmented Lagrangian Method (ALM). Experiments demonstrate the effectiveness of this model and show that both the graph embedding and instance reweighting techniques can effectively enhance knowledge transferability.
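The sparse-coding building block underlying both models can be illustrated with a minimal iterative shrinkage-thresholding (ISTA) sketch. This is a generic, unsupervised ℓ1 formulation, not the thesis's SGSC model (which adds supervised group structure); the dictionary and signal below are illustrative toy values:

```python
# Generic sparse coding via ISTA:
#   min_a  0.5 * ||x - D a||_2^2 + lam * ||a||_1
# (illustrative sketch only; the thesis's SGSC adds label-driven
# group structure on top of this basic objective)

def soft_threshold(v, t):
    """Scalar soft-thresholding: the proximal operator of t*|.|."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(D, x, lam, step=0.5, iters=200):
    """D: m x k dictionary (list of rows), x: length-m signal.
    Returns the sparse code a (length k)."""
    m, k = len(D), len(D[0])
    a = [0.0] * k
    for _ in range(iters):
        # residual r = D a - x
        r = [sum(D[i][j] * a[j] for j in range(k)) - x[i] for i in range(m)]
        # gradient g = D^T r
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(k)]
        # gradient step followed by soft-thresholding
        a = [soft_threshold(a[j] - step * g[j], step * lam) for j in range(k)]
    return a

# Toy example with an identity dictionary: the l1 penalty shrinks the
# large coefficient and zeroes the small one entirely.
D = [[1.0, 0.0],
     [0.0, 1.0]]
a = ista(D, x=[3.0, 0.1], lam=1.0)
# a converges to [2.0, 0.0]: the second coefficient is exactly zero
```

The zeroed coefficient is the key property the abstract relies on: sparse codes select a few relevant atoms, which is what makes them useful for finding a compact shared subspace across domains.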
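The ℓ2,1 constraint in the second model acts through group soft-thresholding applied row-wise to the projection matrix, which is what produces the row sparsity and instance-reweighting effect described above. A minimal sketch of this generic proximal operator (the matrix and threshold values are illustrative assumptions, not taken from the thesis):

```python
# Proximal operator of t*||W||_{2,1}: shrink each row of W by its l2 norm.
# Rows with norm <= t are zeroed out entirely, which is the mechanism
# behind the row sparsity / instance reweighting described in the abstract.
import math

def prox_l21(W, t):
    """Apply group soft-thresholding to each row of W (list of rows)."""
    out = []
    for row in W:
        norm = math.sqrt(sum(v * v for v in row))
        scale = max(0.0, 1.0 - t / norm) if norm > 0 else 0.0
        out.append([scale * v for v in row])
    return out

W = [[3.0, 4.0],   # row norm 5.0 -> shrunk but kept
     [0.3, 0.4]]   # row norm 0.5 -> entire row zeroed
W_sparse = prox_l21(W, t=1.0)
# first row scaled by (1 - 1/5) = 0.8 -> [2.4, 3.2]; second row -> [0.0, 0.0]
```

In an ALM-style solver such as the one the abstract mentions, a step like this is typically applied once per iteration to the projection variable, alternating with the data-fitting and Lagrangian updates.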
Keywords/Search Tags: Transfer Learning, Domain Adaptation, Sparse Coding, Subspace Learning, Manifold Learning