
Research on Domain Adaptation Based on Adaptive Model Parameter Transfer

Posted on: 2020-11-24
Degree: Master
Type: Thesis
Country: China
Candidate: H H Yu
Full Text: PDF
GTID: 2428330590972655
Subject: Computer Science and Technology
Abstract/Summary:
With the rapid growth of information, new application domains keep emerging, and the data available in them is often unlabeled or scarce. To train accurate prediction models from small or unlabeled samples, knowledge can be transferred from a source domain to the target domain; however, the distributions of the two domains are usually inconsistent. Domain adaptation was proposed to address this problem and has become an important research topic in machine learning. This thesis proposes parameter-transfer methods for two new domain adaptation scenarios:

(1) In practical settings, labeling a large amount of data costs considerable time and effort. For the scenario of transferring knowledge from a source domain with many unlabeled samples to a target domain whose samples are also all unlabeled, i.e., wholly unsupervised domain adaptation (WUDA), this thesis proposes a method based on sparse representation over a model-parameter dictionary, built on soft large-margin clustering. The method adaptively performs dictionary learning between the model parameters of the source and target domains to transfer knowledge, and introduces an ℓ2,1,1-norm regularization term to constrain the coefficient matrix of the dictionary, so that domain weights can be selected adaptively from the common dictionary, thereby achieving domain adaptation.

(2) Different practical scenarios focus on different problems. This thesis therefore proposes a new unsupervised domain adaptation scenario with multiple unlabeled target domains, i.e., multi-target unsupervised domain adaptation (1SmT). This scenario breaks the assumption of existing domain adaptation that the label spaces of the domains must be consistent, i.e., it generalizes domain adaptation. To solve the 1SmT problem, this thesis proposes a model-parameter adaptive transfer
framework (PA-1SmT), which realizes not only UDA from the source domain to each target domain but also WUDA among the target domains. Its core idea is to build a common target parameter dictionary from the source and target domains, and then represent each target domain's parameters sparsely with this dictionary, thereby transferring knowledge across domains. Because these methods transfer model parameters rather than the data itself, they can be applied directly to privacy-preserving domain adaptation. Finally, the effectiveness of the proposed methods is verified by experiments.
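The dictionary-based parameter transfer described above can be sketched in code. The following is a minimal illustrative sketch, not the thesis's actual algorithm: the function names and toy dimensions are assumptions, and a plain ℓ2,1 row-sparsity proximal step stands in for the ℓ2,1,1 term and the soft large-margin clustering component. Columns of W are the parameter vectors of the source and target models; the alternating updates learn a shared dictionary D and sparse codes A with W ≈ D·A.

```python
import numpy as np

def prox_l21(A, t):
    # Row-wise group soft-thresholding: proximal operator of t * ||A||_{2,1}.
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return A * scale

def fit_param_dictionary(W, k=4, lam=0.1, iters=300, lr=0.02, seed=0):
    """Alternately update a dictionary D and sparse codes A so that W ~ D @ A.

    W : (d, m) matrix whose columns are model parameter vectors of the
        source / target domains (hypothetical toy setting).
    k : number of dictionary atoms; lam : sparsity weight; lr : step size.
    """
    rng = np.random.default_rng(seed)
    d, m = W.shape
    D = rng.standard_normal((d, k))
    D = D / np.linalg.norm(D, axis=0, keepdims=True)
    A = np.zeros((k, m))
    for _ in range(iters):
        # Proximal gradient step on A with the l2,1 row-sparsity penalty.
        grad_A = D.T @ (D @ A - W)
        A = prox_l21(A - lr * grad_A, lr * lam)
        # Gradient step on D, then normalize atoms to unit length.
        grad_D = (D @ A - W) @ A.T
        D = D - lr * grad_D
        D = D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D, A
```

Each target domain's parameters are then reconstructed as D @ A[:, j], so knowledge moves between domains only through the shared dictionary and coefficients, never through raw data, which is why this style of method is compatible with privacy-preserving adaptation.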
Keywords/Search Tags: Adaptive model parameter transfer, WUDA, 1SmT, Model parameter dictionary learning, Sparse representation