
Improvement And Implementation Of Incremental Learning Method Based On R-EWC

Posted on: 2021-07-06
Degree: Master
Type: Thesis
Country: China
Candidate: Y Yu
Full Text: PDF
GTID: 2518306017959859
Subject: Computer technology
Abstract/Summary:
Recently, in settings involving big data and learning from data streams, incremental learning and online learning have received increasing attention. These settings conflict with the traditional assumption that training data for all tasks is always available. In practice, when building a unified vision system, gradually adding new capabilities to an existing system, or serving online applications fed by continuous data streams, the number of tasks keeps growing, and storing and retraining on all of the data is not feasible. Incremental learning is considered a promising solution to these practical challenges.

Research shows that incremental learning faces a fundamental difficulty: catastrophic forgetting. Adapting a model to new data usually causes severe performance degradation on previous tasks or categories. Biological organisms can continuously adapt to their environment and keep learning; catastrophic forgetting denies artificial systems this ability. To address it, this thesis proposes an improved incremental learning method based on R-EWC. R-EWC (Rotated Elastic Weight Consolidation) builds on Elastic Weight Consolidation, which enables incremental learning by reducing the effective learning rate of parameters important to old tasks; R-EWC additionally rotates the parameter space to better fit the tasks, further improving incremental capacity. Our research shows that weight drift between the old and new data is the key cause of forgetting, and that appropriately constraining the weights can overcome this limitation, yielding a network that retains old-task performance over long periods without revisiting those tasks. On top of R-EWC's rotated elastic weight consolidation, we add multiple weight constraints to eliminate the adverse effects of weight magnitude and interference, and selectively reduce the learning rate of weights important to old tasks so that their knowledge is preserved. Experiments on a set of classification tasks built from the MNIST handwritten digits and the CIFAR dataset show that our method is effective: compared with baseline methods, the multiple-weight-constraint method substantially improves classification accuracy and robustness.

In addition, this thesis briefly surveys the latest research on incremental learning and details its definition, taxonomy, and significance. The basic methods of incremental learning and current advanced methods based on deep neural networks are summarized and discussed, which helps advance research on incremental learning.
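The EWC-style consolidation described above can be sketched as a quadratic penalty that anchors each weight near its old-task value, weighted by that weight's importance (its Fisher information). This is a minimal illustrative sketch, not the thesis's actual implementation; the function names, the flat parameter lists, and the single scalar strength `lam` are assumptions for illustration.

```python
# Hypothetical sketch of an EWC-style weight constraint:
#   L_total = L_new_task + (lam / 2) * sum_i F_i * (theta_i - theta_star_i)^2
# where theta_star are the weights learned on the old task and F_i is the
# (diagonal) Fisher importance of weight i. Large F_i effectively freezes
# weight i, i.e. lowers its learning rate along important directions.

def ewc_penalty(theta, theta_star, fisher, lam):
    """Quadratic penalty pulling important weights toward old-task values."""
    return 0.5 * lam * sum(
        f * (t - ts) ** 2 for t, ts, f in zip(theta, theta_star, fisher)
    )

def total_loss(task_loss, theta, theta_star, fisher, lam=1.0):
    """New-task loss plus the consolidation term on the old-task weights."""
    return task_loss + ewc_penalty(theta, theta_star, fisher, lam)
```

R-EWC goes further by rotating the parameter space (so the diagonal importance approximation fits the task better) before applying such a penalty, and the thesis adds multiple weight constraints on top; both refinements change how `fisher` is computed, not the basic anchoring form shown here.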
Keywords/Search Tags:deep neural network, incremental learning, catastrophic forgetting, R-EWC, multiple weight constraints