In the era of big data, abundant training data has brought huge performance improvements to machine learning. However, large amounts of data require large amounts of manual annotation, which is often time-consuming and labor-intensive; this has motivated the development of transfer learning. Transfer learning aims to assist learning in a label-scarce target domain with knowledge from relevant source domains. Domain adaptation is an important research direction within transfer learning. Domain adaptation methods usually use a distance-based or adversarial approach to reduce the discrepancy between domains and learn domain-invariant features. For learning scenarios where the label spaces of the two domains are not completely consistent, existing work usually uses a threshold method to identify the shared classes and then aligns the distributions of the shared classes across domains.

Although domain adaptation has been widely applied in many real scenarios, it still has shortcomings that need to be addressed: 1) Existing domain adaptation methods usually focus only on minimizing the distribution discrepancy between domains, while paying little attention to the intrinsic structural knowledge of the target domain. 2) Existing methods are only applicable when the source and target domains share a label set, whereas in some real tasks the label spaces across domains are completely different. In view of these two points, this paper proposes the following solutions.

For the first point, this paper proposes Adversarial Domain Adaptation with Target Structural Knowledge. First, initial features of the source and target domains are generated by self-supervised learning. Then, the structural information of the target domain is introduced into the Maximum Classifier Discrepancy (MCD) method to improve learning performance on the target domain. Experiments show that, compared with other closed-set domain adaptation methods, the proposed method improves both target performance and convergence speed.

For the second point, this paper proposes Unsupervised New-set Domain Adaptation with Self-supervised Knowledge. First, self-supervised learning is used for pre-training, and contrastive knowledge is transferred to the target domain to generate discriminative features. Then, a classification loss based on self-supervision is used for target classification when the label sets of the two domains are completely different. Experiments show that the proposed method achieves significant performance improvements over previous methods, including UDA, clustering, and new-class discovery methods. In addition, this method is also applicable to cases where the source and target domains share a label set.
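To make the MCD mechanism mentioned above concrete: MCD trains two task classifiers and measures their disagreement on target samples as the L1 distance between their softmax outputs. The following NumPy sketch is illustrative only (the function names and formulation are ours, not the thesis's implementation), showing just the discrepancy term that the adversarial min-max game is built around:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class dimension.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mcd_discrepancy(logits1, logits2):
    """L1 discrepancy between two classifiers' softmax outputs,
    averaged over a batch of target samples (the core MCD quantity).
    In MCD, the classifiers maximize this term while the feature
    extractor minimizes it."""
    p1, p2 = softmax(logits1), softmax(logits2)
    return np.abs(p1 - p2).sum(axis=1).mean()

# Example: two classifiers scoring 2 target samples over 3 classes.
logits_a = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
logits_b = np.array([[1.5, 1.0, -1.0], [0.1, 0.2, 0.3]])
d = mcd_discrepancy(logits_a, logits_b)  # small positive value
```

When the two classifiers agree exactly, the discrepancy is zero; target samples far from the source support tend to produce disagreement, which is what the adversarial training exploits.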
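The contrastive pre-training used in the second method can likewise be sketched with a standard InfoNCE-style loss, which pulls each anchor feature toward its positive (e.g. an augmented view of the same sample) and pushes it away from the other samples in the batch. This is a minimal sketch of the generic loss, assuming cosine similarity and a temperature hyperparameter; it is not the thesis's exact objective:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive loss.
    anchors, positives: (batch, dim) feature matrices where row i of
    `positives` is the positive view for row i of `anchors`."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sim = a @ p.T / temperature                  # pairwise cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()             # positives on the diagonal
```

With well-separated features the loss is near zero, and it grows when an anchor is closer to another sample's view than to its own, which is what drives the features toward being discriminative.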