
Research On Incremental Learning Based On Convolutional Neural Networks

Posted on: 2021-03-27    Degree: Master    Type: Thesis
Country: China    Candidate: L Q Qi    Full Text: PDF
GTID: 2428330605950616    Subject: Electronics and Communications Engineering
Abstract/Summary:
Image classification is one of the most active research directions in deep learning. As the amount of data grows and new categories of data keep emerging, traditional image classification algorithms face new challenges. Although neural-network-based classifiers have greatly improved classification accuracy compared with traditional algorithms, the overall performance of a network declines sharply when it learns new categories incrementally. To address these problems, this thesis studies the augmentation of the incremental training set and the improvement of the network's incremental learning ability, and proposes an incremental learning algorithm based on an optimized reserved set and a tree-like incremental learning network.

1. Incremental learning algorithm based on an optimized reserved set. To counter the severe loss of classification accuracy when a convolutional neural network learns incrementally, an effective method is to train the new model jointly on the new samples and a subset of the old samples. Retraining on this reserved set lets the model review old knowledge. To make the reserved set as representative of the old sample set as possible, it is built from three parts: samples near the decision boundaries between classes, obtained with an SVM-tree (SVMT) structure; samples closest to each class center; and samples at the cluster centers obtained by running the k-means algorithm on each class. Training on the reserved set preserves the old classes' information, while training on the new classes lets the model learn the new classes' information. Experiments on the CIFAR-100 and ILSVRC2012 datasets show that the algorithm effectively alleviates the loss of classification accuracy during incremental learning and improves overall incremental classification accuracy.
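To make the reserved-set idea concrete, the following minimal Python sketch shows one way such an exemplar selection could be assembled, assuming class features have already been extracted by the network. A plain one-vs-rest SVM stands in for the SVMT structure mentioned above; the function name and the per_class and n_clusters parameters are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans


def build_reserved_set(features, labels, per_class=20, n_clusters=5):
    """Select a small exemplar ("reserved") set per class from feature vectors.

    Combines three sources of exemplars for each class:
      1. samples near the decision boundary (support vectors of a one-vs-rest SVM,
         used here as a stand-in for the SVMT structure),
      2. samples closest to the class mean,
      3. samples nearest to k-means cluster centres within the class.
    """
    reserved_idx = []
    classes = np.unique(labels)

    # 1) boundary samples: support vectors of a one-vs-rest linear SVM
    svm = SVC(kernel="linear", decision_function_shape="ovr").fit(features, labels)
    boundary = set(svm.support_)

    for c in classes:
        idx = np.where(labels == c)[0]
        feats = features[idx]

        # boundary exemplars belonging to this class
        cls_boundary = [i for i in idx if i in boundary][: per_class // 3]

        # 2) exemplars closest to the class mean
        mean = feats.mean(axis=0)
        by_dist = idx[np.argsort(np.linalg.norm(feats - mean, axis=1))]
        cls_center = list(by_dist[: per_class // 3])

        # 3) exemplars nearest to the k-means cluster centres
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(feats)
        cls_cluster = []
        for centre in km.cluster_centers_:
            nearest = idx[np.argmin(np.linalg.norm(feats - centre, axis=1))]
            cls_cluster.append(nearest)

        # merge the three sources, dropping duplicates while keeping order
        reserved_idx.extend(dict.fromkeys(cls_boundary + cls_center + cls_cluster))

    return np.array(reserved_idx)
```

The returned indices would then be used to sample old-class data for joint training with the new classes.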
2. Tree-like incremental learning network. During incremental learning, training on new-class samples changes the network parameters and reduces the network's ability to classify the old classes. To address the severe loss of old-class accuracy when incremental learning is performed on a neural network with a fixed structure, this thesis proposes an incremental network model with a dynamic structure. When the network performs incremental learning, a branch consisting of a small number of convolutional layers is added on top of part of the shared lower layers, and the parameters of the old branches are frozen to preserve the classification information of the old categories. Computing a distillation loss over the branches of the old network model transfers knowledge from the old model to the new one; computing a classification loss on the new branch lets the model learn the new classes' information; and fusing and jointly deciding over the multi-branch features lets the model classify samples. Finally, the accuracy of the algorithm is further improved by data balancing and model fine-tuning. The proposed incremental network can be trained end to end and makes incremental learning easy to realize. Experiments on the CIFAR-100 and ILSVRC2012 datasets show that the proposed model is an effective incremental learning network: it improves the classification ability of the convolutional neural network during incremental learning and effectively alleviates the loss of classification accuracy.
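The PyTorch sketch below illustrates the general branch-plus-distillation idea described above, under the assumption of a frozen shared trunk and frozen old branches. The names BranchIncrementalNet and incremental_loss, the temperature T and the weight alpha are hypothetical and only show how a classification loss and a distillation loss could be combined; they are not the thesis's exact architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BranchIncrementalNet(nn.Module):
    """Sketch of a tree-like incremental network: a shared frozen trunk,
    one frozen branch per previously learned task, and a fresh trainable
    branch for the new classes. Branch logits are concatenated for the
    final decision."""

    def __init__(self, trunk, old_branches, new_branch):
        super().__init__()
        self.trunk = trunk                      # shared low-level conv layers
        self.old_branches = nn.ModuleList(old_branches)
        self.new_branch = new_branch
        # freeze trunk and old branches to preserve old-class knowledge
        for p in self.trunk.parameters():
            p.requires_grad = False
        for p in self.old_branches.parameters():
            p.requires_grad = False

    def forward(self, x):
        h = self.trunk(x)
        old_logits = [b(h) for b in self.old_branches]
        new_logits = self.new_branch(h)
        return torch.cat(old_logits + [new_logits], dim=1)


def incremental_loss(model, old_model, x, y, T=2.0, alpha=0.5):
    """Classification loss on all current logits plus a distillation loss
    that keeps the old-branch outputs close to the frozen old model."""
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    with torch.no_grad():
        old_targets = old_model(x)              # logits of the frozen old model
    n_old = old_targets.size(1)
    kd = F.kl_div(
        F.log_softmax(logits[:, :n_old] / T, dim=1),
        F.softmax(old_targets / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    return ce + alpha * kd
```

In this sketch the distillation term only constrains the logits of the previously learned classes, so the new branch remains free to fit the new categories while the old branches' behaviour is preserved.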
Keywords/Search Tags:convolutional neural network, knowledge distillation, incremental learning, image classification