In recent years, with the rapid development of deep learning, convolutional neural networks (CNNs), as the core of modern intelligent vision, have achieved remarkable results in a variety of recognition tasks and have significantly improved the performance of visual perception models on large, complex datasets. The deep learning paradigm comprises two stages: knowledge training and pattern inference. A model must be trained on sample data satisfying the independent and identically distributed (i.i.d.) assumption before it can be used for inference. However, when new data arrives that falls outside the training distribution (i.e., out-of-distribution, OOD, data), constraints such as hardware storage and privacy protection make it difficult to retain large amounts of historical data for joint optimization of the model. Directly fine-tuning the original model on the new data then causes a sharp decline in its representation performance on the old training data (i.e., in-distribution data). Addressing this problem requires equipping intelligent models with continual learning ability, that is, reducing the forgetting of old knowledge while continually improving the representation of new knowledge as the application scope expands. Existing continual learning solutions often purchase high-performance continual representation at the cost of large static storage and heavy dynamic computation, which hinders the deployment of intelligent vision models in real-world scenarios.

To achieve dynamic and efficient continual learning, this thesis summarizes the challenges faced by intelligent vision models in three stages: basic representation learning, incremental representation expansion, and deployment optimization. First, in the initial representation learning stage, it is difficult to balance feature discriminability on the old training data with representation scalability toward new data distributions. Second, in the incremental representation expansion stage, it is difficult to reconcile stable improvement of overall performance with a flexible and lightweight expansion structure. Finally, in the deployment representation modulation stage, it is difficult to cope with the rapid dynamics of the data environment while keeping feature updates controllable in real time. To address these challenges, this thesis makes the following contributions.

(1) To address the lack of inter-class extensibility when the basic representation is constructed, a representation enhancement scheme based on self-promoted prototype refinement is designed. This work explicitly models the mapping relationships among semantic combinations through a random episode selection strategy, forcing the old inter-class distribution to adapt to different simulated incremental processes. A dynamic relation projection module is introduced to extract a fully optimized relation matrix, enabling dynamic updates from old-class representations to new-class prototypes. To directly verify the extensibility of the enhanced initial representation during continual learning, analytical experiments are conducted on standard few-shot incremental benchmarks, where optimization interference from new classes in the incremental phases is limited.
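As a rough illustration of the self-promoted prototype refinement idea in (1), the sketch below simulates an incremental step by randomly splitting base classes into pseudo-old and pseudo-new groups and refining the pseudo-new prototypes through a learned relation over the pseudo-old ones. The module names, the attention-style relation projection, and the episode construction are illustrative assumptions, not the exact design of this thesis.

    import torch
    import torch.nn as nn

    class RelationProjection(nn.Module):
        """Illustrative relation projection: refines pseudo-new prototypes
        from their learned similarity to pseudo-old prototypes (assumed design)."""
        def __init__(self, dim):
            super().__init__()
            self.query = nn.Linear(dim, dim, bias=False)
            self.key = nn.Linear(dim, dim, bias=False)

        def forward(self, old_protos, new_protos):
            # relation matrix between pseudo-new and pseudo-old classes
            rel = torch.softmax(self.query(new_protos) @ self.key(old_protos).t()
                                / old_protos.shape[1] ** 0.5, dim=-1)
            # refined prototypes mix the initial estimate with projected old knowledge
            return new_protos + rel @ old_protos

    def random_episode(features, labels, n_new):
        """One simulated incremental step: randomly split base classes into
        pseudo-old / pseudo-new groups and return their class-mean prototypes."""
        classes = labels.unique()
        perm = classes[torch.randperm(len(classes))]
        new_cls, old_cls = perm[:n_new], perm[n_new:]
        proto = lambda cs: torch.stack([features[labels == c].mean(0) for c in cs])
        return proto(old_cls), proto(new_cls)

During base training, many such episodes would be drawn and the projection module optimized so that refined pseudo-new prototypes stay discriminative against the pseudo-old ones, which captures the spirit of adapting the old inter-class distribution to simulated incremental processes.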
(2) To address the trade-off between high performance and low computation during incremental structure expansion, a representation extension scheme based on self-sustaining structure expansion is designed. This work fuses branch expansion with main-branch distillation through a structural reorganization strategy, which promotes the update of new features while preserving the transferability of representative features. A prototype selection mechanism suppresses the participation of easily confused samples in the distillation process, thereby enhancing the discriminability between old- and new-class features. Experiments on class-incremental benchmarks show that, as tasks expand, the method maintains the ability to learn new classes while significantly reducing both old-class forgetting and computational cost.

(3) To address the rapid dynamic changes in the data environment during deployment of a continual learning system, a representation modulation scheme based on self-paced imbalance rectification is designed. This work analyzes the heterogeneous responses of the incremental representation to different input data and proposes a frequency compensation strategy. The strategy uses the ratio between the numbers of new- and old-class samples to guide the margins of the output distribution, thereby dynamically adjusting the inter-class gradient contributions (a schematic sketch of this margin adjustment appears after this overview). Experiments with imbalanced perturbations on standard class-incremental benchmarks show that the method effectively improves the update stability of the system under highly dynamic data.

(4) To address the lack of controllability of real-time feature updates during deployment of a continual learning system, a representation modulation scheme based on self-organizing pathway expansion is designed. This work numerically demonstrates a positive correlation between neural pathway coupling and the forgetting of old knowledge, and proposes a controllable representation adaptation strategy based on neural pathways. By decoupling the optimized pathways and features of different classes, the strategy improves the interpretability of the system with respect to different inputs. In addition, the contribution of each sample to optimization is explicitly measured by its degree of pathway overlap, yielding a real-time adaptive update process for the representation. Experiments on class-incremental benchmarks show that the method effectively improves the controllability of the system during real-time representation optimization.

Based on the above ideas, this thesis investigates dynamic visual representation methods for continual learning, improving the dynamic efficiency of the continual learning system from three aspects: representation enhancement, representation expansion, and representation modulation. Comparisons with previous work on multiple standard incremental benchmarks verify the effectiveness and superiority of the proposed methods, providing new perspectives for representation learning research in continual learning scenarios.
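The following sketch illustrates, under stated assumptions, how a sample-count-based margin could modulate inter-class gradient contributions as described in (3). The specific form, a logit offset proportional to the logarithm of per-class sample frequency in the spirit of logit-adjustment losses, and all names are illustrative assumptions rather than the exact formulation used in this thesis.

    import torch
    import torch.nn.functional as F

    def frequency_compensated_loss(logits, targets, class_counts, tau=1.0):
        """Cross-entropy with per-class margins derived from observed sample counts.
        Classes seen less often in the current stream (e.g., old classes during an
        incremental update) receive a larger effective margin, which rebalances
        the gradient contributions between frequent and rare classes."""
        freq = class_counts.float() / class_counts.sum()       # empirical class frequency
        adjusted = logits + tau * torch.log(freq + 1e-12)      # frequent classes are handicapped
        return F.cross_entropy(adjusted, targets)

    # usage: per-class sample counts observed in the current update stream (hypothetical numbers)
    logits = torch.randn(8, 10)
    targets = torch.randint(0, 10, (8,))
    class_counts = torch.tensor([20] * 5 + [500] * 5)          # e.g., old vs. newly arriving classes
    loss = frequency_compensated_loss(logits, targets, class_counts)

Tying the margin to the new/old sample ratio in this way lets the system react to imbalance as it arises, without storing additional data or changing the network structure.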