
Integrating General Domain Knowledge Into Multi-turn Dialogue

Posted on: 2023-09-10
Degree: Master
Type: Thesis
Country: China
Candidate: H J Ren
Full Text: PDF
GTID: 2568306845991089
Subject: Artificial Intelligence
Abstract/Summary:
With the continuous development of knowledge-driven multi-turn dialogue systems, high-quality dialogue data with multi-topic and knowledge annotations continues to emerge. Existing work shows that introducing external knowledge can significantly improve the quality of the dialogue generated by a model, but current knowledge-driven multi-turn dialogue systems still suffer from inaccurate knowledge utilization and insufficient extraction of the core information in knowledge during knowledge fusion. Targeting these issues, this paper studies general-domain knowledge fusion methods for multi-turn dialogue, focusing on improving the utilization of knowledge in the dialogue system and thereby improving the quality of the generated responses. On the one hand, this paper leverages multi-task learning, combined with a dynamic weighting method, to enhance the model's ability to model knowledge information and to perceive knowledge and context dynamically. On the other hand, it uses a convolutional neural network to extract the key information from the knowledge, alleviating the model's failure to fully exploit the core information in the knowledge.

The main contributions of this paper can be summarized as follows:

(1) Targeting the inaccurate utilization of externally introduced knowledge in multi-turn dialogue systems, this paper proposes a knowledge integration method based on multi-task learning and knowledge perception. The method uses a selector to dynamically weight the knowledge vector and the context vector, so that the decoder can selectively attend to knowledge information and context information. In addition, an auxiliary task that generates responses from knowledge alone is jointly trained with the main task, enabling the decoder to model the knowledge information better and improving the quality of the model's responses.

(2) Targeting the incomplete extraction of core information from knowledge in multi-turn dialogue systems, this paper proposes a knowledge integration method based on knowledge convolution. The method first constructs knowledge vectors with an attention mechanism, then applies convolution and pooling operations to these vectors to extract the key information from the external knowledge. The extracted information is then decoded together with the context and knowledge information. As a result, the key information in the knowledge is exploited more fully and the quality of the responses generated by the model is significantly improved.
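The dynamic weighting and joint training in contribution (1) can be sketched as a sigmoid gate over the knowledge and context vectors plus a weighted two-task loss. This is a minimal illustrative sketch, not the thesis's actual implementation: the names (`knowledge_vec`, `context_vec`), the weight shapes, and the hyperparameter `lam` are all assumptions.

```python
import numpy as np

def knowledge_selector(knowledge_vec, context_vec, W, b):
    """Knowledge-perception selector: dynamically weight knowledge vs. context.

    A sigmoid gate is computed from the concatenated knowledge and context
    vectors, and the decoder input is their gated mixture.
    Shapes (assumed): vectors of size d, W of shape (d, 2d), b of shape (d,).
    """
    concat = np.concatenate([knowledge_vec, context_vec])       # (2d,)
    gate = 1.0 / (1.0 + np.exp(-(W @ concat + b)))              # (d,), in (0, 1)
    # Elementwise convex combination: the decoder can lean toward
    # knowledge where the gate is high and toward context where it is low.
    return gate * knowledge_vec + (1.0 - gate) * context_vec    # (d,)

def joint_loss(loss_main, loss_aux, lam=0.5):
    """Multi-task objective: main response-generation loss plus the
    auxiliary knowledge-only generation loss, weighted by lam
    (lam is an assumed hyperparameter)."""
    return loss_main + lam * loss_aux
```

Because the gate lies in (0, 1), each output component stays between the corresponding knowledge and context components, so the selector interpolates rather than overwrites either source.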
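The knowledge-convolution step in contribution (2) can be illustrated as attention over knowledge entries followed by a 1-D convolution and max-pooling across knowledge positions. Again a sketch under stated assumptions: the function names, filter layout `(f, w, d)`, and scaled dot-product attention are illustrative choices, not details taken from the thesis.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_knowledge(query, knowledge):
    """Build an attention-weighted knowledge matrix.

    query: (d,) context summary; knowledge: (n, d) knowledge entries.
    Each knowledge row is rescaled by its scaled dot-product attention weight.
    """
    weights = softmax(knowledge @ query / np.sqrt(query.shape[-1]))  # (n,)
    return weights[:, None] * knowledge                              # (n, d)

def conv_pool(knowledge_mat, filters):
    """1-D convolution over knowledge positions, then max-pooling.

    knowledge_mat: (n, d); filters: (f, w, d) -- f filters of width w.
    Returns an (f,)-vector of pooled key features, one per filter.
    """
    n, d = knowledge_mat.shape
    f, w, _ = filters.shape
    feats = np.empty((f, n - w + 1))
    for i in range(n - w + 1):
        window = knowledge_mat[i:i + w]  # (w, d) slice of knowledge positions
        # Correlate every filter with this window in one contraction.
        feats[:, i] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return feats.max(axis=1)             # max-pool over positions -> (f,)
```

Max-pooling keeps, for each filter, only its strongest response across the knowledge sequence, which is one common way to surface the "key information" before it is decoded together with the context and knowledge.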
Keywords/Search Tags:Multi-turn dialogue, Knowledge perception, Knowledge convolution, Multi-task learning