When a sufficient amount of labeled data is available, deep learning systems achieve excellent performance. However, it is usually difficult to obtain enough training samples, so researchers began to investigate how to achieve good training results with only a small number of labeled samples, and few-shot learning emerged. Meta-learning, as a framework, is applied to few-shot learning: it uses a large number of similar tasks to learn how to adapt to a new task with only a few labeled samples. However, meta-learning typically uses a shallow network as the feature extractor to avoid over-fitting, which leads to insufficient feature extraction. In addition, existing few-shot classification methods assume that each task contains a fixed number of instances; even when tasks differ in the number of classes and samples, meta-learning still applies the learned meta-knowledge equally across all tasks. These methods also fail to consider the distribution difference of unseen tasks, so the meta-knowledge learned on the training set may be of limited use. Together, these issues constitute the class-imbalance, task-imbalance, and distribution-imbalance problems in few-shot learning. As a representative meta-learning method, Model-Agnostic Meta-Learning (MAML) aims to find an optimal initialization by training on multiple tasks; when a new task is encountered, the model needs only one or a few gradient-descent steps to achieve good generalization. In this thesis, a meta-transfer task-adaptive meta-learning (MT-TAML) method is proposed to improve the MAML model. MT-TAML uses meta-transfer learning to learn and transfer the weight parameters of a pre-trained deep neural network, compensating for the deficiency of using a shallow network as the feature extractor. Learnable parameters that balance the contribution of meta-knowledge to each task are added to address the imbalance problems in realistic few-shot learning scenarios. This thesis also introduces a difficult-task training strategy: the class
with the lowest accuracy in each task is selected as a difficult class; difficult classes are then resampled to form difficult tasks, which are used to train the model and improve its accuracy. Experimental results show that the MT-TAML model improves accuracy by 2%-4% over existing few-shot learning methods, and ablation experiments further verify the effectiveness of combining meta-transfer learning with the new balancing parameters.
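The difficult-task construction described above can be sketched as follows. This is a minimal illustration only, assuming per-class accuracies have already been measured on a previous task; the function and parameter names are hypothetical and do not come from the thesis:

```python
import random

def build_difficult_task(per_class_accuracy, samples_by_class, n_way, k_shot, seed=0):
    """Sketch of the difficult-task strategy: pick the n_way classes with the
    lowest measured accuracy, then resample k_shot examples from each of them
    to form a new (difficult) training task.

    per_class_accuracy: dict mapping class id -> accuracy on a previous task
    samples_by_class:   dict mapping class id -> list of sample identifiers
    """
    rng = random.Random(seed)
    # Sort class ids by accuracy in ascending order and keep the worst n_way.
    difficult_classes = sorted(per_class_accuracy, key=per_class_accuracy.get)[:n_way]
    # Resample k_shot examples per difficult class to build the new task.
    return {c: rng.sample(samples_by_class[c], k_shot) for c in difficult_classes}

# Example: classes 1 and 2 have the lowest accuracy, so they form the task.
accs = {0: 0.9, 1: 0.3, 2: 0.5, 3: 0.8}
pool = {c: list(range(10)) for c in accs}
task = build_difficult_task(accs, pool, n_way=2, k_shot=3)
```

In a full training loop, tasks built this way would be interleaved with ordinary randomly sampled tasks, so that the model repeatedly revisits the classes it currently handles worst.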