
A Study On Few-shot Learning Of Non-independent Identically Distributed Data

Posted on: 2022-07-12
Degree: Master
Type: Thesis
Country: China
Candidate: L C Dai
Full Text: PDF
GTID: 2518306320954339
Subject: Computer Science and Technology
Abstract/Summary:
Machine learning has achieved more and more in image processing, natural language processing, data mining, and other fields. Traditional machine learning must satisfy two conditions: first, under the independent and identically distributed (IID) assumption, the training set and the test set must come from the same distribution in order to achieve high recognition accuracy; second, it needs a large amount of labeled data, and the training process must run for many iterations to achieve a good learning effect. However, in some fields (such as information security and medical imaging), annotation data may be insufficient or outdated. Few-shot learning provides an effective way to address these problems and has become a research hotspot in machine learning. The goal of few-shot learning is to complete training quickly with only a small number of labeled samples and to build a model with good generalization performance. However, in practical applications, existing few-shot learning methods still have several problems: algorithms trained with limited labeled data over-fit, which weakens model robustness; existing methods are based on deep learning, with long training times and a huge number of iterative tasks; and most existing models have weak feature-extraction ability and insufficient support samples during training. Therefore, how to make full use of training samples and effectively improve the generalization performance of the model is an urgent problem in few-shot learning.

The dissertation first analyzes the research status and open problems of few-shot learning and, in view of these problems, proposes three solutions. The main contributions of this dissertation are as follows:

(1) To address the poor robustness of existing few-shot learning algorithms, we propose RAINBOW, a robust few-shot learning method based on relation networks. First, we use kernel density estimation and image filtering to add different types of random noise to the training set, forming support sets and query sets under different noise environments. Second, we establish parallel threads and train the relation network end-to-end, producing multiple heterogeneous base models. Finally, probabilistic voting fuses the classification predictions of the final Sigmoid layer of each base model. Experimental results show that RAINBOW is more robust than the relation network and other mainstream few-shot learning algorithms.

(2) Few-shot learning also suffers from a huge number of iterative tasks and a long training process. We propose FAIRY, a fast few-shot learning method based on prototypical networks. First, different types of noise are added to the training set, which is then divided into a support set and a query set. Second, prototypical networks are trained in parallel threads to extract image features of the support and query sets, and the class prototype of each class is computed from the support samples according to the Bregman divergence. Then the L2 norm measures the distance between the support samples' class prototypes and the query samples, and the cross-entropy loss updates the model parameters, yielding multiple heterogeneous base classifiers. Finally, relative majority voting fuses the final classification results. The results show that FAIRY converges quickly and attains high classification accuracy.

(3) To address the weak feature-extraction ability and insufficient support samples of few-shot learning, we propose CAPTAIN, a cross-modal adaptive few-shot learning method based on task dependence. First, feature extraction and task representation are improved through a task-adjustment network and auxiliary collaborative training. Second, based on visual and semantic
intuition, a semantic representation is added to each task, and the model adaptively combines the two modalities. Finally, the metric scale is adjusted to change the nature of the parameter updates of the few-shot algorithm. A series of experiments shows that CAPTAIN effectively improves the feature expression and extraction capabilities of single-modal and modality-aligned few-shot learning methods, giving the model better generalization.

Experimental results show that, across different classification tasks, RAINBOW, FAIRY, and CAPTAIN achieve better classification accuracy than comparable algorithms.
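As a rough illustration of the prototype-based classification step in contribution (2), the sketch below computes each class prototype as the mean of that class's support embeddings, scores queries by squared L2 distance (a Bregman divergence), and evaluates a cross-entropy loss. This is a minimal sketch, assuming features are already-extracted NumPy arrays; the embedding network is omitted, and all function names are illustrative rather than the dissertation's actual implementation.

```python
import numpy as np

def prototypes(support_feats, support_labels, n_classes):
    """Class prototype = mean of the support embeddings of each class."""
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_feats, protos):
    """Softmax over negative squared L2 distances to each prototype.

    Squared L2 distance is a Bregman divergence, which is what justifies
    using the class mean as the prototype in prototypical networks.
    """
    # d[i, c] = ||query_i - proto_c||^2, via broadcasting
    d = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -d
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    """Mean negative log-probability of the true class."""
    return -np.log(probs[np.arange(len(labels)), labels]).mean()
```

In a real episode the features would come from a shared embedding network trained end-to-end on this loss; here they are plain arrays so the distance-and-prototype logic stands alone.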
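The two fusion rules mentioned in contributions (1) and (2), probabilistic voting over Sigmoid outputs and relative (plurality) majority voting over predicted labels, can be sketched as follows. Both helpers are hypothetical illustrations under the assumption that each base classifier's output is already available, not code from the dissertation.

```python
from collections import Counter
import numpy as np

def relative_majority_vote(predictions):
    """Relative (plurality) majority vote over per-classifier labels:
    the most frequent label wins even without an absolute majority."""
    return Counter(predictions).most_common(1)[0][0]

def probabilistic_vote(prob_rows):
    """Probabilistic voting: average the per-classifier class-probability
    vectors (e.g. final Sigmoid outputs), then pick the argmax class."""
    return int(np.argmax(np.mean(prob_rows, axis=0)))
```

For example, with five base classifiers predicting labels [1, 0, 1, 2, 1] for one sample, the relative majority vote selects label 1 even though it holds only three of five votes.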
Keywords/Search Tags: non-IID learning, few-shot learning, kernel density estimation, image filtering, cross-modal learning