In recent years, artificial intelligence technology represented by deep learning has developed rapidly; a large number of advanced algorithms have been proposed and have achieved great success in many fields. Nevertheless, existing deep learning algorithms cannot shed their dependence on large-scale training data, which greatly limits their application scenarios. In contrast, human beings can not only learn quickly from a small number of samples but also demonstrate extrapolative and integrative capabilities. To overcome the dependence of existing deep learning algorithms on extensive training data and to bridge the gap between current artificial intelligence and genuine human intelligence, few-shot learning has attracted extensive attention from both academia and industry. For a given N-way K-shot task (i.e., a task containing N classes with only K labeled samples per class; usually K < 10), it is difficult for a deep learning model to learn generalizable task knowledge from the given N×K training samples alone: using a small number of support set samples to train or finetune a deep learning model with a huge number of parameters easily leads to overfitting. To resolve this dilemma of deep learning algorithms in few-shot scenarios, this dissertation carries out research on Few-shot Learning Based on Task Adaptation along the following two dimensions. i) Knowledge Adaptation: learning meta-knowledge that generalizes across tasks from external auxiliary datasets, so that the classifier of a target task can be constructed directly from the learned meta-knowledge and the support set of the target task (no parameter updating). ii) Model Adaptation: adapting a pretrained model to the target few-shot task by adding a task adaptation module consisting of a small number of learnable parameters to the pretrained model and finetuning the added module with the support set of the target task.

The four key research contents and innovative contributions of this dissertation around the above two dimensions are summarized as follows.

1) Fast Task Adaptation Based on Progressive Meta-learning. Meta-learning-based few-shot learning approaches learn cross-task generalizable meta-knowledge by constructing a large number of meta-training tasks (or mimic tasks) from auxiliary datasets, thus enabling fast adaptation to the target few-shot task based on its support set samples. Despite these advantages, such approaches ignore the hardness and quality of each meta-training task when constructing meta-training tasks from the auxiliary dataset, which hinders progressively improving the cross-task generalization performance of the learned meta-knowledge. Inspired by the cognitive process by which human beings recognize new things (learning from easy tasks to hard ones), this dissertation carries out a study on Fast Task Adaptation Based on Progressive Meta-learning. Specifically, a curriculum-based progressive meta-learning framework is proposed, whose core idea is to progressively divide the base classes in the auxiliary dataset into smaller class subsets through hierarchical clustering and to construct a training curriculum by sampling easy-to-hard meta-training tasks from the class subsets obtained at different layers. By progressively increasing the cross-task generalization ability of meta-knowledge on easy-to-hard meta-training tasks, the proposed framework significantly enhances the fast adaptation capability of the target task to the pretrained meta-knowledge.

2) Cross-domain Task Adaptation Based on Domain-aware Meta-learning. In few-shot learning, there is not only a class-shift problem (no overlap between the classes of the target few-shot task and those of the auxiliary dataset or meta-training tasks) but possibly also a domain-shift problem (the target task comes from an unknown domain distribution). Once there is a domain shift between the classes of the target few-shot task and the
auxiliary dataset, it is difficult to successfully generalize the pretrained meta-knowledge to that target few-shot task. To improve the cross-domain generalization performance of the pretrained meta-knowledge, this dissertation carries out a study on Cross-domain Task Adaptation Based on Domain-aware Meta-learning. Specifically, a joint data- and model-driven domain-aware meta-learning framework is proposed, whose core idea is to improve the cross-domain generalization of the learned meta-knowledge from both the data and model perspectives. From the data perspective, meta-training tasks with diverse styles are constructed to expand the domain distribution space of meta-training tasks so as to cover the unknown domain distribution of the target task as much as possible. From the model perspective, discriminative representations are encouraged through domain-robust contrastive learning loss functions to improve the adaptability of meta-knowledge in the face of the domain-shift problem. By improving the cross-domain generalization ability of the learned meta-knowledge from both perspectives, the proposed framework significantly enhances the cross-domain adaptation performance of the target task to the pretrained meta-knowledge.

3) Generalized Task Adaptation Based on Decoupled Prompt Tuning. Multimodal pretrained models have been shown to harbor rich knowledge that generalizes across tasks. Prompt learning strategies utilize a small number of support set samples to learn task-relevant prompts, so as to adapt the pretrained model to the target task. Despite their advantages, such approaches are prone to poor generalization of the task-adapted model: after prompt tuning on the target task, the pretrained model struggles to retain good generalization on unknown new tasks. To solve this dilemma, this dissertation carries out a study on Generalized Task Adaptation Based on Decoupled Prompt Tuning. Specifically, a decoupled prompt tuning framework
based on feature channel debiasing is proposed, whose core idea is to effectively decouple task-specific features from task-shared features, so as to maximize the generalization performance of the task-adapted model on unknown new tasks. By decoupling task-specific and task-shared features during prompt tuning, the proposed framework effectively improves the generalization of the task-adapted model.

4) Robust Task Adaptation Based on Sample Dual-denoising. Adapter-finetuning-based few-shot learning methods utilize the support set samples of the target task to finetune a task adaptation module embedded in the pretrained model, so as to adapt the pretrained model to the target task without catastrophic knowledge forgetting. Nevertheless, such methods assume that the support set samples of the target task are clean and of high quality. In complex few-shot learning scenarios, however, task-irrelevant noisy samples may exist in the support set. This dissertation shows for the first time that both image noise (X-Noise: noisy image backgrounds that interfere with the representation of target features) and label noise (Y-Noise: mislabeled images) negatively impact the generalization of the learned task-adapted model. To address this problem, this dissertation conducts a study on Robust Task Adaptation Based on Sample Dual-denoising. Specifically, a sample dual-denoising framework based on contrastive relevance aggregation is proposed, whose core idea is to use the relevance between local image features across the support set samples to recognize image/label noise in the support set. By computing weights for the images and the local image regions in the support set to promote the learning of task-specific discriminative representations, the proposed framework effectively improves the noise robustness of the task-adapted model.
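To make the Knowledge Adaptation setting concrete, the following is a minimal sketch of how a classifier for an N-way K-shot task can be built from support set embeddings with no parameter updating, in the style of prototype-based metric classifiers. It is an illustration only, not the dissertation's actual method; the embeddings are synthetic stand-ins for features produced by a pretrained meta-learned encoder, and all names are hypothetical.

```python
import numpy as np

def build_prototype_classifier(support_features, support_labels):
    """Construct a nearest-prototype classifier from support embeddings.

    No parameters are learned or updated: each class is represented by
    the mean embedding (prototype) of its K support samples.
    """
    classes = np.unique(support_labels)
    prototypes = np.stack([
        support_features[support_labels == c].mean(axis=0) for c in classes
    ])

    def classify(query_features):
        # Assign each query to the class of its nearest prototype
        # (Euclidean distance in embedding space).
        dists = np.linalg.norm(
            query_features[:, None, :] - prototypes[None, :, :], axis=-1
        )
        return classes[dists.argmin(axis=1)]

    return classify

# Toy 2-way 3-shot task with 4-dim synthetic "embeddings":
# class 0 clusters near 0, class 1 clusters near 1.
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0.0, 0.1, (3, 4)),
                          rng.normal(1.0, 0.1, (3, 4))])
labels = np.array([0, 0, 0, 1, 1, 1])
clf = build_prototype_classifier(support, labels)

queries = np.concatenate([rng.normal(0.0, 0.1, (2, 4)),
                          rng.normal(1.0, 0.1, (2, 4))])
print(clf(queries))  # → [0 0 1 1]
```

Because classification reduces to a distance computation against fixed class prototypes, adapting to a new task requires only a forward pass over its N×K support samples, which is the "no parameter updating" property the Knowledge Adaptation dimension relies on.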