
A Study Of The Catastrophic Forgetting Problem In Multi-task Continuous Learning With Neural Networks

Posted on: 2022-08-06
Degree: Master
Type: Thesis
Country: China
Candidate: Z B Guo
Full Text: PDF
GTID: 2518306521490654
Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
Human beings learn and adjust knowledge continually throughout their lives, retaining previous knowledge while acquiring new knowledge; this ability is essential for automatic learning systems in the real world. Machine learning has made remarkable progress on a wide range of tasks and has surpassed humans in many areas, yet enabling a model to keep learning over a lifetime, as humans do, remains a long-term challenge. Most deep neural networks share a basic limitation: they can only learn their weights from a fixed training batch. When there are multiple tasks, the optimization objectives usually differ, and even when they coincide, the data sets differ; old weights can then be overwritten during learning, causing catastrophic forgetting. The problem is especially severe for sample data in different poses (such as multi-angle global transformations) across tasks, where the network's learning capacity is insufficient, its robustness is weak, and forgetting is pronounced; as the number of tasks increases, learning ability declines further.

To address these problems in multi-task continuous learning, this thesis proposes two models: DMR-CN and HAT-GAL.

In the DMR-CN model, a new dynamic memory routing strategy is proposed that controls the forward path of a Capsule Network (CapsNet) according to the current input set. To recall previous knowledge, a binary routing table is maintained across successive tasks, and incremental online competitive prototype clustering is then used to update the routes of the current task. In addition, a sparsity measure is used to decouple the significant routes of different learning tasks.

The HAT-GAL model builds on the HAT (hard attention to the task) memory network and combines it with a multi-task continuous-learning method based on generative adversarial learning (GAL) to improve the "memory" of the model. Embedding GAL into the fully connected layers of the HAT network addresses the decline of learning ability as the number of tasks increases and the insufficient robustness to multi-pose data. At the same time, an evolutionary strategy is used to optimize the network structure and parameters, which effectively alleviates the parameter redundancy, the tendency to fall into local optima, and the overwriting of previous knowledge that an increasing number of tasks brings.

Experiments show that DMR-CN performs well on multi-pose transformations of high-resolution images; even when the number of tasks grows to ten, it maintains good test accuracy on previous tasks and good learning ability on new tasks. HAT-GAL shows good generality, high efficiency, and better robustness on low-resolution images, and is suitable for a variety of small data sets.
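The abstract describes DMR-CN's binary routing table only at a high level. As a hedged sketch of the bookkeeping it implies (all names and the threshold here are hypothetical, not the thesis's implementation): one can binarize each task's learned capsule coupling coefficients, store them per task, and gate the current coefficients with the stored mask when that task's data recur.

import numpy as np

# Hypothetical per-task binary routing table: routing_table[t][l] is a
# 0/1 matrix marking which lower-to-higher capsule connections task t
# uses in capsule layer l.
routing_table = {}

def register_task(task_id, coupling_coeffs, threshold=0.1):
    # Binarize the learned coupling coefficients of each capsule layer
    # and store them so the forward path of an old task can be recalled.
    routing_table[task_id] = [(c > threshold).astype(np.uint8)
                              for c in coupling_coeffs]

def recall_routes(task_id, coupling_coeffs):
    # Gate the current coupling coefficients with the stored binary
    # routes before running capsule routing for this task.
    return [c * m for c, m in zip(coupling_coeffs, routing_table[task_id])]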
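For readers unfamiliar with HAT (hard attention to the task): each task owns a learnable embedding per layer, and a sigmoid of the scaled embedding acts as a near-binary gate on the layer's units, so units important for earlier tasks can be protected. Below is a minimal PyTorch sketch of that published gating mechanism; the class name and sizes are illustrative, and the GAL coupling described above is not shown.

import torch
import torch.nn as nn

class HATLinear(nn.Module):
    # Fully connected layer gated by a per-task hard-attention mask.
    def __init__(self, in_features, out_features, n_tasks):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        # one learnable embedding per task, one scalar per output unit
        self.task_embedding = nn.Embedding(n_tasks, out_features)

    def forward(self, x, task_id, scale):
        # scale is annealed toward a large value during each epoch,
        # pushing the sigmoid toward a hard 0/1 gate
        e = self.task_embedding(torch.tensor([task_id]))  # (1, out)
        mask = torch.sigmoid(scale * e)                   # near-binary
        return self.fc(x) * mask, mask

After a task is learned, HAT keeps the element-wise maximum of the masks seen so far and uses it to damp the gradients of weights feeding protected units, which is what preserves old knowledge while new tasks train.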
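The abstract does not specify which evolutionary strategy is used, so the following is only an illustration of the general idea: a minimal OpenAI-style evolution strategy in NumPy that tunes a flat parameter vector by sampling Gaussian perturbations. The loss function, population size, and step sizes are placeholder assumptions.

import numpy as np

def evolve(theta, loss_fn, pop_size=50, sigma=0.1, lr=0.01, steps=200):
    # Estimate a descent direction for loss_fn from random perturbations
    # of theta; no backpropagation through the network is required.
    for _ in range(steps):
        eps = np.random.randn(pop_size, theta.size)       # perturbations
        losses = np.array([loss_fn(theta + sigma * e) for e in eps])
        # standardize so the update is invariant to the loss scale
        adv = (losses - losses.mean()) / (losses.std() + 1e-8)
        # move against the loss: perturbations weighted by (dis)advantage
        theta = theta - lr / (pop_size * sigma) * (eps.T @ adv)
    return theta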
Keywords/Search Tags:multi-task continuous learning, neural networks, catastrophic forgetting, generative adversarial learning, evolutionary strategy