
Researches And Applications Of Transfer Learning In Label-scarce Scenarios

Posted on: 2022-12-20
Degree: Doctor
Type: Dissertation
Country: China
Candidate: A Ma
Full Text: PDF
GTID: 1488306764459074
Subject: Computer Science and Technology
Abstract/Summary:
With the rapid development of computing power and artificial intelligence, humans can use massive data to train models. However, effective training requires accurate labels, since unlabeled data resembles noise that a machine cannot interpret. Meanwhile, the probability distribution of the available labeled data may differ from that of the unlabeled data, even when the two look alike. Under such circumstances, traditional supervised algorithms, which assume that training and test data are i.i.d., may fail to handle the distribution divergence. If we ignore the distribution differences and directly apply a model learned on the source to the target, we can hardly obtain good results. Hence, how to effectively utilize labeled data and reduce distribution divergence in a label-scarce environment has become a difficult challenge in the community.

To tackle this challenge, this dissertation proposes transfer learning methods for label-scarce situations. The dissertation concentrates on image classification, where training samples with a different distribution are used to predict the labels of test samples. For the test samples, labels are unknown or only partly available, so the classification problem is label-scarce. When labeled data are missing, this dissertation exploits auxiliary data (with a different sample space) that share the same semantic space with the test data to assist the optimization of the learning model. Denoting the training data as the source domain and the test data as the target domain, the label-scarce classification problem is transformed into the task of using the source domain to learn an optimal model for predicting the labels of the target domain. However, the discrepancy between the source and target domains may harm label prediction. To solve this problem, this dissertation proposes both shallow and deep methods to reduce the domain discrepancy.

In shallow methods, this dissertation tackles two challenges in image classification: learning common factors between the source and target domains in a subspace, and reducing the side effect of negative transfer. Compared with mainstream methods, our proposal contributes to both subspace learning and label prediction. Mainstream methods use first-order statistics as regularizers to learn the projection matrix, while our work simultaneously employs moment matching and graph embedding to learn the subspace. Specifically, graph embedding designs rules that preserve the relationships among samples from the perspective of manifold learning, thereby reducing negative transfer. For label prediction, in contrast to mainstream methods that rely on a KNN classifier, our work predicts labels in both the high-dimensional and the low-dimensional space.

In deep methods, this dissertation addresses two challenges in image classification: how to learn domain-invariant features, and how to fully train the feature-learning network and the discriminator. Compared with mainstream methods, our work employs the principle of minimum and maximum entropy to learn domain-invariant features in an adversarial framework. To our knowledge, our mini-max entropy strategy is the first to achieve domain confusion in this way.

In summary, the contributions of this dissertation are as follows. First, this dissertation reports a shallow transfer learning method for label-scarce situations. Specifically, we utilize graph embedding and moment matching to align the source and target domains in the learned subspace. Meanwhile, we resort to K-means clustering and structural risk minimization to update pseudo labels. Our work performs well in both single-source and multi-source scenarios. Second, this dissertation proposes a deep transfer learning method for label-scarce situations. Conditional probability and adversarial learning are combined with ResNet to form a deep adaptation framework with entropy optimization. Compared with other methods, our work presents two adversarial parts, i.e., a generator and a discriminator, with entropy minimization and maximization. Our work is compatible with different kinds of backbone networks and is easy to compute. Third, this dissertation discusses the application of transfer learning to visual tracking. We transform visual tracking into a modality-transfer task: the historical frames are treated as the source domain and the future frames as the target domain. The size of the candidate set is limited so that both the accuracy and the speed of the algorithm are ensured. Overall, this dissertation proposes both shallow and deep methods for label-scarce situations and discusses the application of transfer learning to visual tracking. Experiments demonstrate the effectiveness of our proposals.
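The moment-matching idea used to align the two domains can be illustrated with a linear-kernel maximum mean discrepancy (MMD), i.e., the distance between source and target feature means. This is only a minimal sketch: the feature dimensions, sample sizes, and Gaussian data below are illustrative assumptions, not the dissertation's actual setup.

```python
import numpy as np

def linear_mmd(source, target):
    """Squared MMD with a linear kernel: squared distance between feature means."""
    diff = source.mean(axis=0) - target.mean(axis=0)
    return float(diff @ diff)

rng = np.random.default_rng(0)
# Source and target drawn from mean-shifted Gaussians to mimic domain divergence.
source = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
target = rng.normal(loc=0.5, scale=1.0, size=(200, 16))

print(linear_mmd(source, target))   # grows with the domain gap
print(linear_mmd(source, source))   # exactly 0 for identical samples
```

A subspace-learning method would minimize such a term (typically with richer kernels and higher-order moments) with respect to the projection matrix, alongside the graph-embedding regularizer.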
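The minimum/maximum-entropy adversarial principle can likewise be sketched with the Shannon entropy of softmax predictions on target samples: one player is trained to maximize target-prediction entropy while the other minimizes it, driving domain confusion. The toy logits and shapes below are illustrative assumptions; only the entropy term itself is computed here, not the full adversarial training loop.

```python
import numpy as np

def softmax(logits):
    """Numerically stable row-wise softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(probs, eps=1e-12):
    """Mean Shannon entropy of class-probability rows."""
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())

# Toy target-domain predictions: confident vs. maximally uncertain.
confident = softmax(np.array([[8.0, 0.0, 0.0], [0.0, 9.0, 0.0]]))
uncertain = softmax(np.zeros((2, 3)))

# In a mini-max entropy game, one component is updated to *maximize* this
# quantity on target data and the other to *minimize* it.
print(entropy(confident))  # close to 0
print(entropy(uncertain))  # close to log(3) ≈ 1.0986
```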
Keywords/Search Tags:subspace learning, graph embedding, transfer learning, domain adaptation, adversarial learning, visual tracking, label-scarce classification