With the unprecedented growth of computing budgets and data availability, deep models have achieved excellent performance on object recognition tasks. However, their learning mechanism still falls short of cognitive learning, which not only acquires new knowledge continuously while retaining most of the frequently used old knowledge, but also builds high-precision recognition ability from only a few annotated samples. Few-shot class incremental learning (FSCIL) is an emerging machine learning paradigm inspired by cognitive learning. Given base classes with sufficient training data and a small number of trainable samples from new classes, FSCIL first trains a representation model on the old classes and then continually adapts it to the new classes. However, FSCIL faces challenges beyond the traditional learning paradigm. On the one hand, fine-tuning the model on new classes disturbs the feature distribution of the old classes, leading to catastrophic forgetting. On the other hand, because only a few samples of each new class are available for training, the model becomes biased toward the old classes, resulting in distribution collapse and overfitting. This thesis focuses on incremental learning from a small number of samples and addresses the issues of catastrophic forgetting and overfitting.

To address catastrophic forgetting, this thesis proposes an incremental learning approach named Avoid Recent Bias Self-Learning Mask Partitioned Incremental Learning, referred to as ASPIL. ASPIL is a two-stage strategy that alternates between regional isolation and regional integration to accomplish continual class incremental learning. Regional isolation uses network pruning to isolate the learning of new classes so that it does not interfere with existing knowledge, while regional integration builds a unified, high-precision recognition model through feature masking and dual-branch information fusion. To evaluate the contribution of each proposed component, ablation experiments are performed systematically on standard incremental learning datasets, and the method is compared with a series of state-of-the-art approaches. The experimental results show that ASPIL improves the memory ability of the artificial neural network, increasing the recognition rate by more than 5.27% on average over recent well-known methods.

To tackle the overfitting problem of FSCIL, and building on the above results on catastrophic forgetting, this thesis further proposes a network for Meta Few-shot Class Incremental Learning Based on VAE Sampling Replay, VMIL for short. During incremental learning, VMIL uses a deep network trained on the meta-training set to adapt quickly to new tasks, and it is constrained by a distillation loss and a margin loss to alleviate catastrophic forgetting and avoid inter-class confusion. VMIL also replays old-class samples from the feature distribution of latent variables: the latent variables are obtained by dimensionality reduction with a variational auto-encoder and then stored, which extends the feature representation capability for the incremental classes and reduces overfitting. The proposed VMIL method thus provides a systematic solution to catastrophic forgetting and overfitting. Experiments show that VMIL improves significantly over the baseline methods and achieves new state-of-the-art results.
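
To illustrate the regional-isolation idea behind ASPIL (this is a minimal sketch under assumed conventions, not the thesis implementation), the code below marks the highest-magnitude weights of each layer as reserved for old classes and blocks their gradients, so a new session only updates the pruned, free region of the network. The function and parameter names (`build_old_task_masks`, `freeze_old_regions`, `keep_ratio`) are hypothetical.

```python
import torch
import torch.nn as nn

def build_old_task_masks(model: nn.Module, keep_ratio: float = 0.6):
    """Mark the top-|w| fraction of each weight tensor as belonging to old classes.

    Returns a dict mapping parameter name -> bool mask (True = reserved for old knowledge).
    """
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:  # skip biases and normalization parameters
            continue
        k = int(p.numel() * keep_ratio)
        # k-th largest magnitude is the (numel - k + 1)-th smallest
        threshold = p.detach().abs().flatten().kthvalue(p.numel() - k + 1).values
        masks[name] = p.detach().abs() >= threshold
    return masks

def freeze_old_regions(model: nn.Module, masks: dict):
    """Zero the gradients of weights reserved for old classes, so training on a
    new session cannot interfere with previously acquired knowledge."""
    handles = []
    for name, p in model.named_parameters():
        if name in masks:
            mask = masks[name]
            handles.append(p.register_hook(lambda g, m=mask: g * (~m).float()))
    return handles
```

The regional-integration stage of ASPIL, which fuses the isolated regions through feature masking and dual-branch information fusion, is not covered by this sketch.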
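
Similarly, the latent-variable replay used by VMIL can be sketched as follows: backbone features are compressed by a small variational auto-encoder, only per-class latent statistics are stored, and pseudo-features of old classes are later sampled to accompany the few new-class samples. Again, the class and function names (`FeatureVAE`, `store_class_statistics`, `replay_old_classes`) are hypothetical, and PyTorch is assumed.

```python
import torch
import torch.nn as nn

class FeatureVAE(nn.Module):
    """Compress backbone features into a low-dimensional latent space."""
    def __init__(self, feat_dim=512, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(feat_dim, latent_dim * 2)  # outputs mu and log-variance
        self.dec = nn.Linear(latent_dim, feat_dim)

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu, logvar

    def decode(self, z):
        return self.dec(z)

@torch.no_grad()
def store_class_statistics(vae, features_per_class):
    """Keep only the mean latent code and its spread for each old class."""
    stats = {}
    for cls, feats in features_per_class.items():
        mu, logvar = vae.encode(feats)
        stats[cls] = (mu.mean(0), logvar.exp().mean(0).sqrt())
    return stats

@torch.no_grad()
def replay_old_classes(vae, stats, per_class=20):
    """Sample pseudo-features for every stored class to mix into a new session."""
    feats, labels = [], []
    for cls, (mu, std) in stats.items():
        z = mu + std * torch.randn(per_class, mu.numel())
        feats.append(vae.decode(z))
        labels.append(torch.full((per_class,), cls, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)
```

Storing only latent statistics rather than raw exemplars keeps the memory cost per old class constant, which is one motivation for replaying from a learned latent distribution instead of an image buffer.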