
Towards Few-shot Image Recognition Via Episodic-training Mechanism

Posted on: 2022-11-12    Degree: Master    Type: Thesis
Country: China    Candidate: X C Liu    Full Text: PDF
GTID: 2518306776492614    Subject: Automation Technology
Abstract/Summary:
The success of deep learning stems from large amounts of labeled data, whereas humans acquire good recognition ability after seeing only a few samples. The gap between these two facts has drawn great attention to the study of Few-Shot Learning. In contrast to the traditional deep learning scenario, Few-Shot Learning is not simply about classifying unseen samples; it rapidly adapts meta-knowledge to a new task in which only very limited labeled data and knowledge gained from previous experience are available. Recently, significant advances have been made on this problem by combining meta-learning with episodic training. The episodic sampling strategy randomly constructs meta-learning tasks from the dataset to optimize the model. The intuition is that episodic sampling transfers knowledge from known categories (i.e., seen categories with sufficient training examples) to new categories (i.e., novel categories with only a few examples), simulating the human learning process.
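For concreteness, the following is a minimal sketch of the episodic sampling just described, i.e., randomly constructing an N-way K-shot task from a labeled dataset. The function name, the plain-Python data handling, and the default values of N, K, and Q are illustrative assumptions, not the thesis implementation.

import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_query=15):
    """Randomly construct one N-way K-shot meta-learning task (episode).

    `dataset` is assumed to be an iterable of (sample, label) pairs; each
    episode consists of a small labeled support set and a query set.
    """
    # Group samples by class label.
    by_class = defaultdict(list)
    for sample, label in dataset:
        by_class[label].append(sample)

    # Pick N classes, then K support and Q query samples per class,
    # relabeling classes 0..N-1 within the episode.
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        picked = random.sample(by_class[cls], k_shot + q_query)
        support += [(x, episode_label) for x in picked[:k_shot]]
        query += [(x, episode_label) for x in picked[k_shot:]]
    return support, query

The model is then optimized episode by episode on such tasks, which is what allows it to mimic the few-shot test conditions during training.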
Although episodic training is effective, existing methods ignore two key issues: (1) how the knowledge learned in the past can benefit new tasks when training proceeds one episode after another, and (2) episodic-training-based Few-Shot Learning methods neglect intra-class diversity and excessively pursue inter-class discrimination, so the learned features fail to transfer to new tasks. To address these problems, this thesis studies few-shot image recognition via the episodic-training mechanism. The main contents and contributions are as follows:

(1) For the first problem, this thesis proposes a memory-augmented Few-Shot Learning method that establishes relationships between episodic tasks and makes full use of knowledge learned in the past. Unlike existing meta-learning methods, it introduces a dynamic memory module and builds a meta-knowledge memory bank that stores a representative feature of each class during training (a minimal sketch of such a memory bank is given after the abstract). To exploit this meta-knowledge fully, a graph augmentation module is also introduced: it aggregates the knowledge in memory related to the current task through meta-knowledge mining, then feeds the aggregated meta-knowledge and the sample features of the current task into a graph neural network with adaptive weighting to mine the relationships between them, enabling fast and comprehensive reasoning on new tasks.

(2) The memory bank itself is not updated by back-propagation, whereas the feature encoder continuously updates its parameters through back-propagation, so the same sample is represented inconsistently across different episodic tasks. To address this, the thesis proposes a purified-memory Few-Shot Learning method that ensures the stability and consistency of the learned knowledge. The optimal prototype representation of each category is studied from the perspective of the information bottleneck: by gradually refining information from the semantic labels, the stored knowledge gains general expressiveness, consistency, and stability.

(3) For the second problem, this thesis proposes a two-stage Few-Shot Learning method based on self-supervision and knowledge distillation, which learns data representations that correctly distinguish between categories while remaining invariant to common factors of variation in the data. In the self-supervised pre-training stage, the output predictions are kept invariant to common factors of variation in the input samples. In addition, considering the complementary advantages of the pre-training and episodic-training mechanisms, meta-learning tasks are randomly constructed after pre-training, and knowledge distillation is used to ensure that the learned features have inter-class discrimination, intra-class diversity, and generalization across task changes, so that they transfer better to new tasks.

Experimental results on four few-shot datasets (miniImageNet, tieredImageNet, CUB-200-2011, and CIFAR-FS) show that the proposed algorithms achieve higher classification accuracy on all datasets than both the standard baselines in this field and the best-performing methods of recent years, further demonstrating their effectiveness and superiority. The main proposed method has been published at IJCAI 2021.
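As a rough illustration of the meta-knowledge memory bank referred to in contribution (1), the sketch below stores one representative feature (prototype) per class and refreshes it with a momentum update as new episode features arrive, then retrieves the stored entries most relevant to a query. The class keys, the momentum value, and the update/read interface are assumptions made for illustration, not the published method.

import numpy as np

class MetaKnowledgeMemory:
    """Minimal meta-knowledge memory bank: one prototype vector per class."""

    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.prototypes = {}  # class id -> feature vector

    def update(self, class_id, feature):
        """Blend a new episode feature into the stored class prototype."""
        feature = np.asarray(feature, dtype=np.float32)
        if class_id not in self.prototypes:
            self.prototypes[class_id] = feature
        else:
            old = self.prototypes[class_id]
            self.prototypes[class_id] = self.momentum * old + (1.0 - self.momentum) * feature

    def read(self, query_feature, top_k=5):
        """Return the top-k stored prototypes most similar to the query,
        i.e. the past knowledge most relevant to the current task."""
        query = np.asarray(query_feature, dtype=np.float32)
        scored = []
        for class_id, proto in self.prototypes.items():
            sim = float(query @ proto / (np.linalg.norm(query) * np.linalg.norm(proto) + 1e-8))
            scored.append((sim, class_id, proto))
        scored.sort(key=lambda t: t[0], reverse=True)
        return [(class_id, proto) for _, class_id, proto in scored[:top_k]]

In the thesis, the retrieved entries would then be combined with the current task's sample features by the graph augmentation module; that step is omitted here.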
Keywords/Search Tags:Few-Shot Learning, episodic-training, meta-learning, self-supervised, knowledge distillation