Dialogue generation is an important direction in natural language processing, which aims to enable computers to engage in natural and fluent conversations like humans. With the development of neural network technology and large-scale datasets in recent years, dialogue generation models can learn and generate natural language from massive corpora, giving them high practical value and academic research significance. Researchers have found that incorporating external knowledge can improve the performance of open-domain multi-turn dialogue generation models: external knowledge not only provides rich background information but also alleviates issues common to generative models, such as repetitive response styles, insufficient information, and content irrelevant to the context. Existing research generally enhances the understanding and expressive ability of the dialogue system by introducing large-scale corpora or leveraging existing knowledge bases, thereby improving the naturalness and fluency of the generated responses.

This thesis conducts in-depth research on knowledge-enhanced open-domain multi-turn dialogue generation, focusing on the following issues. The first objective is to effectively utilize the structural relationships within external knowledge bases, enabling the dialogue system to better understand the dialogue context and specific concepts and to provide more relevant and informative responses. The second objective is to design more effective model structures that ensure the logical coherence and factual accuracy of knowledge in generated responses. Finally, it is also essential to address uncertainty and noise interference in real-world scenarios so as to enhance the generalization and adaptability of the dialogue system. More specifically, the research contents of this thesis are as follows.

(1) A Relation Transition aware Knowledge-Grounded Dialogue Generation model (RTKGD) is proposed. Specifically, inspired by the latent logic of human
conversation, this model integrates dialogue-level relation transition regularities with turn-level entity semantic information. In this manner, interactions between pieces of knowledge are taken into account, producing abundant clues for predicting the appropriate knowledge and generating coherent responses. Experimental results under both automatic and manual evaluation indicate that the proposed model outperforms state-of-the-art baselines.

(2) A Key Information-focused Contrastive Learning framework for Robust Dialogue Generation (KI-CL) is proposed, aiming to improve the representation learning ability of the model in the face of natural perturbations in real-world conversations. Specifically, a data augmentation method is designed to construct positive and negative samples from dialogue data. Through contrastive learning, the model learns to focus on the key information within the dialogue context and to generate reasonable and accurate responses. Experimental results demonstrate that the proposed method significantly improves robustness and generalization. Moreover, because the framework is compatible with different pre-trained models, it offers strong flexibility and scalability.

In summary, this thesis proposes novel ideas and methodologies for the task of knowledge-enhanced open-domain multi-turn dialogue generation, significantly enhancing the abilities of dialogue generation models in both semantic understanding and knowledge representation. The research results offer valuable insights for future studies in related fields and hold significant practical implications.
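For concreteness, contrastive objectives of the kind used in frameworks like KI-CL typically take an InfoNCE-style form: the representation of a dialogue context is pulled toward its key-information-preserving augmentation (the positive) and pushed away from the other samples in the batch (the negatives). The following NumPy sketch illustrates that generic loss only; it is not the thesis's actual implementation, and the function name, batch setup, and temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive loss with in-batch negatives (illustrative).

    anchors:   (B, D) representations of original dialogue contexts.
    positives: (B, D) representations of their augmentations; for anchor i,
               positives[j] with j != i serve as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    logits = a @ p.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Matching (anchor, positive) pairs lie on the diagonal.
    return -np.mean(np.diag(log_prob))

# Toy check: perfectly aligned pairs should score a lower loss than
# positives that are indistinguishable from one another.
reps = np.eye(4)
loss_aligned = info_nce_loss(reps, reps)
loss_uninformative = info_nce_loss(reps, np.ones((4, 4)))
print(loss_aligned < loss_uninformative)  # True
```

In practice the anchor and positive representations would come from an encoder applied to the original and augmented dialogues, and the loss would be combined with the standard generation objective.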