Deep learning has recently attracted increasing attention from researchers due to its outstanding performance in computer vision, natural language processing, and reinforcement learning. However, when annotated data are scarce, traditional deep learning methods generally perform unsatisfactorily. To address the incompetence of traditional deep learning methods when only a few labeled samples are available in practice, few-shot learning (FSL), and in particular few-shot classification (FSC), was proposed. In the fine-tuning stage, current methods typically choose hyper-parameters based on experience. Since the target dataset contains no validation or test images, the performance of the fine-tuned model cannot be evaluated directly. In addition, under few-shot conditions the classifier parameters quickly converge to a non-optimal solution, which further reduces classification performance. This thesis studies the above problems, and its main contributions are as follows:

1. Few-Shot Linear Discriminant Analysis (FSLDA) based on metric learning is proposed. FSLDA constructs the optimal linear classifier by fully exploiting the knowledge contained in the target dataset, which provides a parameter initialization method for the classification layer of the model. Compared with a randomly initialized classification layer, FSLDA initialization not only speeds up convergence but also encourages convergence to a more stable region, and it gives the last fully connected layer a better starting point that fine-tuning with backpropagation alone may not reach, thus guaranteeing a lower bound on model accuracy. Ablation studies on the miniImageNet dataset show that the Meta-Baseline method with FSLDA alone achieves average performance improvements of 3.07% and 2.99% under the fine-tuning layer policies "Last1" and "All", respectively.

2. Drawing on the experience of meta-learning-based pretraining methods, this thesis proposes an Adaptive Fine-tuning (AFT) algorithm, which performs adaptive epoch learning on the validation classes of the base dataset by designing an adaptive fine-tuning termination rule. Furthermore, based on AFT, a hybrid fine-tuning strategy for different sample sizes and different fine-tuning layer policies is proposed by analyzing model performance under the different fine-tuning strategies. The hybrid fine-tuning strategy not only adaptively decides whether fine-tuning is needed (i.e., whether the fine-tuned model outperforms the FSLDA model), but also prevents the model from underfitting or overfitting, thereby improving both the efficiency and the accuracy of the algorithm. Ablation results on the miniImageNet dataset show that the Meta-Baseline method with AFT under the fine-tuning layer policy "All" brings further performance improvements of 0.40%, 0.99%, and 0.79% for sample sizes of 10-shot, 20-shot, and 30-shot, respectively.

3. The resulting hybrid fine-tuning strategy is evaluated under the pretraining methods R2D2, SKD-GEN0, and RFS-simple. Comparative experiments show that the proposed hybrid fine-tuning strategy achieves an average performance improvement of 2.30% on the miniImageNet dataset and 2.78% on the tieredImageNet dataset over current experience-based fine-tuning methods.
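The idea behind FSLDA — fitting a linear discriminant classifier on the support-set embeddings and using its weights to initialize the last fully connected layer — can be sketched as follows. This is a minimal illustration under standard LDA assumptions (shared within-class covariance, shrunk toward the identity because few-shot support sets make the raw estimate singular); the function name, the shrinkage scheme, and the exact formulation are the author's guesses, not the thesis's implementation.

```python
import numpy as np

def fslda_init(features, labels, shrinkage=0.5):
    """Sketch of LDA-based classifier initialization (hypothetical).

    features: (n_samples, dim) support-set embeddings
    labels:   (n_samples,) integer class labels
    Returns (W, b) usable as initial weights/bias of the last
    fully connected layer; scores are features @ W.T + b.
    """
    classes = np.unique(labels)
    dim = features.shape[1]
    # Per-class means of the support embeddings.
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Pooled within-class covariance, shrunk toward the identity
    # so it stays invertible with very few samples per class.
    centered = features - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / max(len(features) - len(classes), 1)
    cov = (1 - shrinkage) * cov + shrinkage * np.eye(dim)
    inv = np.linalg.inv(cov)
    W = means @ inv                                   # one row per class
    b = -0.5 * np.einsum('kd,kd->k', means @ inv, means)
    return W, b
```

Initializing the classification layer with such closed-form weights, rather than random values, is what gives fine-tuning the better starting point described above.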
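The adaptive termination rule of AFT can be read as an early-stopping decision driven by accuracy on the validation classes of the base dataset, which also settles whether fine-tuning beats the FSLDA initialization at all. The sketch below is a plausible reconstruction, not the thesis's rule: the patience mechanism and the function interface are assumptions for illustration.

```python
def adaptive_finetune(val_acc_per_epoch, fslda_acc, patience=2):
    """Hypothetical sketch of an AFT-style termination rule.

    val_acc_per_epoch: validation accuracy after each fine-tuning
        epoch, measured on validation classes of the base dataset.
    fslda_acc: accuracy of the FSLDA-initialized model before
        fine-tuning (the baseline to beat).
    Returns (best_epoch, use_finetuned): the epoch to stop at, and
    whether fine-tuning improved on FSLDA at all (best_epoch is
    None when it never did).
    """
    best_acc, best_epoch, wait = fslda_acc, None, 0
    for epoch, acc in enumerate(val_acc_per_epoch):
        if acc > best_acc:
            best_acc, best_epoch, wait = acc, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break  # accuracy stopped improving: terminate early
    return best_epoch, best_epoch is not None
```

This captures the two decisions the hybrid strategy makes: when to stop fine-tuning, and whether to keep the fine-tuned model or fall back to FSLDA.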