Few-shot learning addresses machine learning under the condition of limited data and has received extensive attention from researchers in recent years. Unlike conventional deep learning training, few-shot learning aims to enable models to extract effective information from a small number of samples and to acquire the ability to learn new concepts. It allows deep learning models to generalize from only a few samples, much as humans do, and to quickly learn new knowledge, which is significant both for learning rare categories and for reducing computational cost.

This thesis investigates three problems: few-shot incremental unknown-class detection, few-shot classification, and few-shot class-incremental learning. These correspond, respectively, to relaxing the restriction in classic few-shot learning that test samples must belong to unknown classes; solving the traditional few-shot classification problem; and retaining memory of existing knowledge during few-shot learning. The three problems are unified in a more generally applicable few-shot learning scenario: for a test sample, we first determine whether it belongs to a known class; if so, we perform the classification task; if it is of an unknown class, we perform few-shot class-incremental learning and update both the classification model and the unknown-class detection model. The contributions of this thesis are summarized as follows:

1. This thesis investigates few-shot incremental unknown-class detection and proposes an incremental few-shot unknown-class detection algorithm based on class-center maintenance, which relaxes the constraint in the classic few-shot problem setting that test samples must be of unknown classes. The algorithm combines a distance-based rejection strategy with a graph-based rejection strategy to judge comprehensively whether a sample belongs to a known class. During incremental learning, the algorithm continually uses a small amount of labeled data to expand the
range of known classes and to maintain its ability to detect unknown classes. Compared with the baseline algorithm (an open-set recognition method extended to the few-shot scenario), the proposed algorithm achieves better performance on both network-traffic and image datasets.

2. This thesis investigates the classical few-shot classification problem and proposes a graph-based information propagation algorithm that addresses insufficient information propagation in transductive few-shot learning and significantly improves the model's classification performance. The algorithm embeds the data of a few-shot task into a graph structure and performs iterative edge and node updates to propagate information. Experimental results show that the proposed method outperforms other methods on three benchmark datasets for few-shot classification. Transductive learning is sensitive to the test data distribution, and in the classical problem setting the test samples are uniformly distributed. To study how transductive few-shot learning performs under different test data distributions, this thesis analyzes the effect of imbalanced distributions on transductive few-shot classification algorithms and designs a distribution detection strategy. Experimental results demonstrate that, with the distribution detection strategy, the performance fluctuation of the proposed algorithm across few-shot tasks with different distributions is significantly reduced.

3. This thesis proposes a feature transformation method to address feature degradation in few-shot class-incremental learning, where the limited number of novel-class samples makes it difficult to adjust model parameters and leads to poor feature representations for novel-class samples. The proposed model learns a linear mapping from known-class data before classification, and then maps class centers and test samples into a new feature space to improve the feature distribution. Experiments demonstrate that the proposed feature transformation
method maintains good accuracy throughout the incremental learning process, with low performance degradation on three benchmark datasets. Further experiments validate that the proposed feature transformation brings the feature representations closer to the theoretical optimum.
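The class-center maintenance and distance-based rejection described in contribution 1 can be sketched as follows. This is a minimal illustration only, assuming Euclidean distance to per-class mean embeddings and a fixed rejection threshold; the names (`CenterBank`, `tau`) are hypothetical and not taken from the thesis, and the thesis additionally combines this rule with a graph-based rejection score that is not shown here.

```python
from math import dist

class CenterBank:
    """Maintains one center per known class and rejects far-away samples as unknown.

    Illustrative sketch of class-center maintenance for incremental
    unknown-class detection; distance-based rejection only.
    """

    def __init__(self, tau):
        self.centers = {}  # label -> class-center embedding (list of floats)
        self.tau = tau     # distance threshold beyond which a sample is rejected

    def add_class(self, label, support_features):
        # Incremental step: the center of a newly added class is the mean
        # of its few labeled support embeddings.
        n = len(support_features)
        dim = len(support_features[0])
        self.centers[label] = [
            sum(f[i] for f in support_features) / n for i in range(dim)
        ]

    def predict(self, feature):
        # Return the nearest known class, or None (unknown) when the sample
        # is farther than tau from every maintained class center.
        if not self.centers:
            return None
        label, d = min(
            ((lbl, dist(feature, c)) for lbl, c in self.centers.items()),
            key=lambda t: t[1],
        )
        return label if d <= self.tau else None
```

Samples labeled as unknown by such a detector would then trigger the class-incremental step: their embeddings form a new class center via `add_class`, expanding the set of known classes without retraining the backbone.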