
Deep Incremental Learning Via Knowledge Accumulation

Posted on: 2023-03-31    Degree: Doctor    Type: Dissertation
Country: China    Candidate: K Wei    Full Text: PDF
GTID: 1528306905996569    Subject: Circuits and Systems
Abstract/Summary:
Incremental learning is one of the most essential research directions in artificial intelligence. Its goal is to enable a model to learn a sequence of new tasks continually, without access to the data of previous tasks. With the rapid development of artificial intelligence, incremental learning has become a key step toward applying algorithms in real-world situations, and it has attracted attention from both industry and academia. Thanks to recent progress, deep incremental methods can learn several tasks in sequence and obtain satisfactory results at a small cost in computation and memory, but they still face open theoretical problems and key technical challenges, most notably the catastrophic forgetting that accompanies incremental learning. This dissertation therefore aims to enable deep models to learn several tasks continually through knowledge accumulation, and to explore patterns by which incremental learning can be combined with other applications. The contributions of this dissertation can be summarized as follows:

1. We propose a novel zero-shot setting, named Incremental Zero-Shot Learning, together with corresponding evaluation metrics and a method. Existing zero-shot learning methods are trained on fixed, pre-collected datasets and cannot improve without relearning previous data, which seriously limits their application in real-world situations. To address this problem, this dissertation proposes a class-incremental zero-shot recognition method that learns several tasks continually and progressively increases its ability to recognize unseen classes. First, a generative replay module is constructed to synthesize data of previous tasks, and the replayed data is used while training on the current task, which to some extent converts incremental zero-shot learning into supervised zero-shot learning. Then, a knowledge distillation strategy is leveraged to constrain the update of model parameters and preserve the mapping between input and output, transferring the knowledge of previous tasks to the current task. Finally, the proposed method can be flexibly combined with several zero-shot learning methods, enabling them to learn continually.

2. We propose a more challenging zero-shot setting, named Lifelong Zero-Shot Learning, which improves the practicality and extensibility of zero-shot learning methods. In real-world situations, data from different domains is complex and exhibits domain gaps, so zero-shot learning methods cannot preserve the abilities acquired on previous tasks while adapting to new domains. To address this problem, this dissertation proposes a novel domain-incremental zero-shot recognition method that learns several datasets continually without relearning previous ones. First, unified representation modules are constructed so that the model can learn several datasets with different data forms and attributes. Then, a selective-retraining strategy is leveraged to constrain the update of essential parameters and preserve the knowledge of previous tasks. Finally, a knowledge distillation strategy is leveraged to transfer the knowledge of previous tasks to the current task, achieving knowledge accumulation across different datasets. (A code sketch of the replay-plus-distillation recipe shared by contributions 1 and 2 follows below.)
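The following is a minimal, self-contained PyTorch sketch of the two building blocks named above: generative replay of previous-task data and knowledge distillation from the frozen previous model. It is an illustration rather than the dissertation's implementation; the Generator, Classifier, distillation_loss, and train_step names, the dimensions, and the loss weight lam are all assumptions introduced here.

```python
# Hypothetical sketch of generative replay + knowledge distillation for
# class-incremental learning; not the dissertation's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy feature generator standing in for the replay generator."""
    def __init__(self, latent_dim=16, out_dim=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))

    def forward(self, z):
        return self.net(z)

class Classifier(nn.Module):
    """Toy classifier standing in for the zero-shot recognition model."""
    def __init__(self, in_dim=64, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_classes))

    def forward(self, x):
        return self.net(x)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target distillation: keeps the current model close to the
    input-output mapping of the frozen previous-task model."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def train_step(model, old_model, generator, real_x, real_y, optimizer,
               lam=1.0, replay_size=32):
    """One incremental step: cross-entropy on current-task data, plus
    cross-entropy and distillation on generatively replayed pseudo-data."""
    with torch.no_grad():                            # replay is a fixed target
        replay_x = generator(torch.randn(replay_size, generator.latent_dim))
        teacher_logits = old_model(replay_x)         # frozen previous model
        replay_y = teacher_logits.argmax(dim=1)      # pseudo-labels for replay
    replay_logits = model(replay_x)
    loss = (F.cross_entropy(model(real_x), real_y)
            + F.cross_entropy(replay_logits, replay_y)
            + lam * distillation_loss(replay_logits, teacher_logits))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Tiny smoke test with random data.
model, old_model, gen = Classifier(), Classifier(), Generator()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 64), torch.randint(0, 10, (8,))
print(train_step(model, old_model, gen, x, y, opt))
```

The design point is that the frozen previous-task model serves both as a pseudo-labeler for the replayed samples and as the distillation teacher, so no stored exemplars of earlier tasks are required.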
3. We propose a class-incremental learning method based on zero-shot transfer. When a deep model learns several tasks sequentially, its parameters change continually and the feature representations of two adjacent tasks diverge; this discrepancy, named the semantic gap, leads the model to forget the knowledge of previous tasks. To address this problem, this dissertation proposes a novel class-incremental learning method based on zero-shot transfer. First, zero-shot transfer is leveraged to search for a common semantic space, and the feature representations of two adjacent tasks are aligned in this space, alleviating catastrophic forgetting. Then, the features in the common semantic space are regularized and re-represented, which preserves their discriminability. Finally, the proposed method can be combined with regularization-based incremental methods to alleviate catastrophic forgetting further.

4. We propose a class-incremental learning method based on disentangled representation. When a deep model learns several tasks and its parameters change sequentially, it suffers from the semantic gap problem; however, merely transferring and aligning features cannot fully alleviate the semantic gap and catastrophic forgetting, because the features in the common semantic space contain abundant class-unrelated information. To address this problem, this dissertation proposes a class-incremental learning method based on disentangled representation: a feature disentanglement module is designed to decouple each feature into class-disentangled and task-disentangled components, removing redundant information; in addition, a transfer network based on an attention mechanism is constructed to guide feature transfer and preserve discriminative information, promoting feature alignment and further alleviating catastrophic forgetting. (A sketch of the semantic-space alignment and disentanglement ideas follows below.)
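Under the same caveats as above, the sketch below illustrates the core mechanism of contributions 3 and 4: disentangling class-related from task-related information, then projecting the class-related part of old- and new-task features into a common semantic space and aligning them there. Disentangler, SemanticProjector, and alignment_loss are hypothetical names, and the attention-based transfer network of contribution 4 is omitted for brevity.

```python
# Hypothetical sketch of feature disentanglement + common-semantic-space
# alignment for class-incremental learning; not the dissertation's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Disentangler(nn.Module):
    """Splits a feature into a class-related half and a task-related half,
    so that alignment can act on class information only."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.class_head = nn.Linear(feat_dim, feat_dim // 2)
        self.task_head = nn.Linear(feat_dim, feat_dim // 2)

    def forward(self, f):
        return self.class_head(f), self.task_head(f)

class SemanticProjector(nn.Module):
    """Maps class-related features into a common semantic space in which the
    representations of two adjacent tasks can be compared and aligned."""
    def __init__(self, in_dim=32, sem_dim=32):
        super().__init__()
        self.proj = nn.Linear(in_dim, sem_dim)

    def forward(self, f):
        return F.normalize(self.proj(f), dim=1)  # unit-norm semantic vectors

def alignment_loss(disentangler, projector, feats_new, feats_old):
    """Pulls the new model's class-related features toward the old model's
    features inside the common semantic space (the semantic-gap term)."""
    class_new, _ = disentangler(feats_new)
    s_new = projector(class_new)
    with torch.no_grad():                    # old features are fixed targets
        class_old, _ = disentangler(feats_old)
        s_old = projector(class_old)
    return 1.0 - F.cosine_similarity(s_new, s_old, dim=1).mean()

# Tiny smoke test with random backbone features.
dis, proj = Disentangler(), SemanticProjector()
f_new, f_old = torch.randn(8, 64), torch.randn(8, 64)
print(alignment_loss(dis, proj, f_new, f_old).item())
```

In practice this alignment term would be added to the current task's classification loss, so the model stays discriminative on new classes while the class-related subspace remains stable across tasks.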
Keywords/Search Tags: Incremental Learning, Catastrophic Forgetting, Semantic Gap, Zero-Shot Learning