Neural networks have suffered from the catastrophic forgetting problem ever since they were proposed. Adapting a well-trained model to a dataset with new categories by fine-tuning causes it to fail on the tasks it has already learned. This makes model training rely on a large, complete joint dataset and prevents models from learning continually from a constant stream of incoming data. The problem is especially acute for object detection, where acquiring training data for industrial and medical applications requires time-consuming manual labeling by professionals, and where old datasets may no longer be available due to data copyright or data privacy issues. Existing incremental object detection (iOD) methods still show non-negligible forgetting, so in practice detection models are usually extended by going back and retraining from scratch. Research on efficient incremental learning methods for object detection is therefore necessary. The main work of this paper is as follows.

Inspired by the independence of different detection results, this paper is the first to introduce a structure-based approach to the iOD problem and proposes a novel iOD framework named MHD-PB, which is memory-free and detection-model agnostic. It learns an independent detection head for each incremental task on top of a shared, fixed feature extractor to improve parameter efficiency, and additionally introduces piggyback masks to address the insufficient flexibility of the frozen features. The proposed framework is realized on a commonly used RCNN implementation. Under the VOC iOD experimental protocol it outperforms existing iOD methods and even the commonly considered upper bound of structure-based methods. This paper further presents analysis experiments on the piggyback masks, proposes a neuron sharing degree metric, and explores the association between neuron activation and image semantics.

This paper then analyzes the deficiencies of the loss designs in current iOD knowledge distillation methods and proposes a more efficient incremental learning regularization scheme for RCNN. It is the first to introduce RoI box feature distillation for RCNN iOD, uses a more reasonable preprocessing of the box classification logits, and adopts the outlier-insensitive smooth L1 loss for the box regression output. For a fair comparison, a small number of old-class samples are stored as memory, and a hybrid replay strategy is proposed to address class imbalance and to enhance the discrimination between old and new classes in the iOD problem. The proposed method significantly outperforms current knowledge distillation and memory-based methods under the VOC iOD experimental protocol. The ablation study demonstrates the effectiveness of each improvement and provides an important reference for future iOD distillation loss design.

In summary, this paper presents two iOD methods built on different ideas, both of which independently surpass the current state of the art under the VOC iOD experimental protocol. The proposed methods show untapped potential and open up new routes for iOD research. In practice, MHD-PB does not conflict with the knowledge distillation and the various training strategies in this paper, and they can be combined for better performance.
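
To make the multi-head, piggyback-masked structure concrete, the following is a minimal PyTorch-style sketch. It is not the MHD-PB implementation: the class names (`PiggybackConv2d`, `MultiHeadDetector`), the mask threshold, and the choice of applying masks to the frozen backbone convolutions are illustrative assumptions consistent with the description above (a shared fixed feature extractor, one independent detection head per incremental task, and learnable binary masks over frozen weights).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PiggybackConv2d(nn.Conv2d):
    """Convolution with frozen pretrained weights and a learnable real-valued
    mask that is binarized in the forward pass (piggyback-style).
    Only the mask is trained; gradients reach it via a straight-through trick."""

    def __init__(self, *args, threshold: float = 5e-3, **kwargs):
        super().__init__(*args, **kwargs)
        self.weight.requires_grad_(False)            # shared weights stay fixed
        self.threshold = threshold
        self.mask_real = nn.Parameter(torch.full_like(self.weight, 1e-2))

    def forward(self, x):
        hard = (self.mask_real > self.threshold).float()
        # Straight-through estimator: forward uses the binary mask,
        # backward passes gradients to mask_real.
        mask = hard + self.mask_real - self.mask_real.detach()
        return F.conv2d(x, self.weight * mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)


class MultiHeadDetector(nn.Module):
    """A shared, frozen feature extractor with one detection head per task."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.heads = nn.ModuleDict()                 # task id -> detection head

    def add_task(self, task_id: str, head: nn.Module):
        # Only the new head (and its masks) is trained for the new task.
        self.heads[task_id] = head

    def forward(self, images, task_id: str):
        features = self.backbone(images)
        return self.heads[task_id](features)
```

In a full iOD setting one such mask would be kept per task and selected together with the corresponding head; the sketch keeps a single mask per layer for brevity.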
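
The regularization scheme summarized above combines three terms; a hedged sketch of plausible loss forms is given below. The abstract does not fix the exact formulations, so the L2 form of the RoI feature term, the particular logit preprocessing (keeping only old-class logits, with background assumed at index 0, before a temperature softmax), and the temperature and beta values are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def roi_feature_distill(student_feats: torch.Tensor, teacher_feats: torch.Tensor):
    """Match the pooled RoI box features of the new detector to those of the
    frozen old detector for the same proposals (L2 is one simple choice)."""
    return F.mse_loss(student_feats, teacher_feats)


def cls_logit_distill(student_logits, teacher_logits, num_old_classes, T: float = 2.0):
    """Distill box classification outputs over the old classes only.
    The slice assumes background at index 0 followed by the old classes."""
    s = F.log_softmax(student_logits[:, : num_old_classes + 1] / T, dim=1)
    t = F.softmax(teacher_logits[:, : num_old_classes + 1] / T, dim=1)
    return F.kl_div(s, t, reduction="batchmean") * (T * T)


def box_reg_distill(student_deltas, teacher_deltas, beta: float = 1.0):
    """Smooth L1 between the box regression outputs of the new and old
    detectors: quadratic for small residuals, linear for large ones, so a
    few outlier boxes do not dominate the distillation term."""
    return F.smooth_l1_loss(student_deltas, teacher_deltas, beta=beta)
```

During training, such terms would be added with suitable weights to the standard RCNN detection losses computed on the new-task data.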
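
The abstract does not detail the hybrid replay strategy, so the sketch below only illustrates the generic building block it relies on: a small exemplar memory of old-class samples mixed into each training batch at a fixed ratio so that old classes stay represented alongside the new ones. The class name, the sampling policy, and the replay ratio are hypothetical, not the paper's exact strategy.

```python
import random


class ExemplarMemory:
    """A small, per-class buffer of stored old-class training samples."""

    def __init__(self, capacity_per_class: int = 10):
        self.capacity = capacity_per_class
        self.buffers = {}                        # class id -> list of samples

    def add(self, class_id: int, sample):
        buf = self.buffers.setdefault(class_id, [])
        if len(buf) < self.capacity:
            buf.append(sample)
        else:
            # Replace a random stored sample once the buffer is full.
            buf[random.randrange(self.capacity)] = sample

    def sample(self, n: int):
        pooled = [s for buf in self.buffers.values() for s in buf]
        return random.sample(pooled, min(n, len(pooled)))


def mixed_batch(new_task_batch, memory: ExemplarMemory, replay_ratio: float = 0.25):
    """Replace a fraction of each new-task batch with replayed old-class samples."""
    n_replay = int(len(new_task_batch) * replay_ratio)
    replayed = memory.sample(n_replay)
    return new_task_batch[: len(new_task_batch) - len(replayed)] + replayed
```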