Open-domain dialogue systems need to cope with constantly changing dialogue topics and content while maintaining fluency and readability. Today's commercial open-domain dialogue systems are dominated by chatbots, which cannot accomplish specific tasks. Task-oriented dialogue systems, by contrast, can handle multiple tasks at the same time, but in an open environment they must be able to adapt quickly to new tasks. To this end, this paper proposes an iterative model update method based on continual learning. The method repeatedly executes the three steps of "detection - review - model update"; during this process, the system continuously learns new tasks while retaining its ability to handle the original tasks. This paper implements a basic task-oriented dialogue system with a pipeline structure and studies the sub-modules involved in the above loop. Specifically, the contributions of this paper are as follows:

(1) An out-of-domain intent detection model based on a self-training framework. This model is applied in the "detection" step of the above method. To address the shortage of annotated out-of-domain intent samples during training, the model uses a self-training framework to make effective use of a large amount of unlabeled data. To learn more discriminative vector representations, the model also improves the supervised contrastive learning loss.

(2) An open-domain named entity recognition model based on prompt learning. The model uses a masked language model to strengthen entity boundaries, which resolves recognition errors caused by out-of-vocabulary words, and the prompt-based formulation makes more efficient use of pre-trained models. Combined with replay-based continual learning, the model can also serve the "model update" step of the cyclic update method: extracting and aggregating entity template representations in vector space makes the selection of typical samples during replay more efficient.

(3) A study of backward knowledge propagation in continual learning models. Based on the experimental results, a dialogue state tracking model that supports continual learning across domains is implemented, which can be used in the "model update" step. In the exploratory experiments, knowledge propagation within the network is tracked by adding different forms of backward connections to a progressive neural network. From experimental results in natural language processing and image processing, general conclusions about knowledge transfer in continual learning are drawn.

Finally, combining the above contributions with some rule-based design, this paper implements a simulated open-domain dialogue system based on continual learning and provides a detailed system design and usage instructions.
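To make the "detection - review - model update" cycle concrete, the following is a minimal, hypothetical sketch in Python. The `system` object and its methods (`detect_out_of_domain`, `respond`, `human_review`, `update_model`) and the `review_batch_size` attribute are illustrative stand-ins for the components described above, not the paper's actual interfaces.

```python
# Schematic sketch of the "detection -> review -> model update" loop.
# All attributes of `system` are hypothetical placeholders for the paper's
# concrete components (OOD intent detection, annotation, continual-learning update).
def continual_update_loop(system, incoming_utterances):
    buffer = []
    for utterance in incoming_utterances:
        if system.detect_out_of_domain(utterance):   # step 1: detection
            buffer.append(utterance)                 # defer unknown-task utterances
        else:
            system.respond(utterance)                # handled by existing tasks

        if len(buffer) >= system.review_batch_size:
            reviewed = system.human_review(buffer)   # step 2: review / annotation
            system.update_model(reviewed)            # step 3: continual model update
            buffer.clear()
```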
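For contribution (1), the sketch below shows one plausible form of a self-training round combined with a supervised contrastive term. The model interface (an encoder returning features and intent logits from raw text), the confidence threshold, and the unweighted sum of the two losses are assumptions for illustration; the paper's improved contrastive loss may differ from this standard formulation.

```python
# Minimal sketch of a self-training round for out-of-domain intent detection
# with a (standard) supervised contrastive loss. Names and interfaces are assumed.
import torch
import torch.nn.functional as F

def sup_con_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over features of shape (batch, dim)."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature               # pairwise similarities
    mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    mask.fill_diagonal_(0)                                   # positives exclude self
    logits_mask = torch.ones_like(mask).fill_diagonal_(0)    # denominator excludes self
    exp_sim = torch.exp(sim) * logits_mask
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
    pos_count = mask.sum(dim=1).clamp(min=1)
    loss = -(mask * log_prob).sum(dim=1) / pos_count         # average over positives
    return loss.mean()

def self_training_round(model, labeled_loader, unlabeled_loader, optimizer,
                        conf_threshold=0.9):
    """One round: train on labeled data, then pseudo-label confident unlabeled data."""
    model.train()
    for texts, labels in labeled_loader:
        feats, logits = model(texts)                         # assumed: encoder + intent head
        loss = F.cross_entropy(logits, labels) + sup_con_loss(feats, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    pseudo_labeled = []
    model.eval()
    with torch.no_grad():
        for texts in unlabeled_loader:
            _, logits = model(texts)
            conf, pred = logits.softmax(dim=1).max(dim=1)
            for t, c, p in zip(texts, conf, pred):
                if c >= conf_threshold:                      # keep confident predictions
                    pseudo_labeled.append((t, p.item()))
    return pseudo_labeled                                    # merged into labeled set next round
```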
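For contribution (2), one way to select typical samples for replay from aggregated entity template representations is to cluster them in vector space and keep the most central member of each cluster. The sketch below assumes pre-computed embeddings and uses k-means purely for illustration; it is not the paper's exact selection procedure.

```python
# Hypothetical sketch of typical-sample selection for replay-based continual learning:
# cluster entity template representations and keep the sample nearest to each centroid.
import numpy as np
from sklearn.cluster import KMeans

def select_replay_exemplars(embeddings, memory_size):
    """embeddings: (n_samples, dim) numpy array of entity template representations.
    Returns indices of the samples closest to each k-means centroid."""
    k = min(memory_size, len(embeddings))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    exemplar_idx = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        exemplar_idx.append(members[np.argmin(dists)])       # most central member
    return exemplar_idx
```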
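For contribution (3), the sketch below illustrates only the general idea of adding a backward connection to a progressive network so that the old task's head also receives information from the new column. The two-column linear architecture and the placement of the backward connection are assumptions for illustration, not the configurations studied in the paper.

```python
# Illustrative two-column progressive network with an added backward connection
# (new column -> old column's head) for probing backward knowledge propagation.
import torch
import torch.nn as nn

class TwoColumnProgressive(nn.Module):
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        # column 1: trained on the old task, normally frozen afterwards
        self.col1 = nn.Linear(in_dim, hidden)
        self.head1 = nn.Linear(hidden, out_dim)
        # column 2: trained on the new task
        self.col2 = nn.Linear(in_dim, hidden)
        self.head2 = nn.Linear(hidden, out_dim)
        # standard forward lateral connection: column 1 -> column 2
        self.lateral_fwd = nn.Linear(hidden, hidden)
        # added backward connection: column 2 -> column 1's head
        self.lateral_bwd = nn.Linear(hidden, hidden)

    def forward(self, x):
        h1 = torch.relu(self.col1(x))
        h2 = torch.relu(self.col2(x) + self.lateral_fwd(h1))
        out_new = self.head2(h2)
        # old-task prediction now also receives information from the new column
        out_old = self.head1(h1 + self.lateral_bwd(h2))
        return out_old, out_new
```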