
Research On Task-oriented Dialogue System Model Based On Sequence Learning

Posted on: 2021-05-11
Degree: Master
Type: Thesis
Country: China
Candidate: B Yu
Full Text: PDF
GTID: 2428330614960432
Subject: Computer technology
Abstract/Summary:
With the continuous development of voice interaction technology, people increasingly want an intelligent assistant that can converse as naturally and empathetically as a person and complete tasks such as weather inquiries, hotel reservations, and travel arrangements. This thesis studies the problem that task-oriented dialogue system models based on sequence learning cannot effectively use an external knowledge base (KB). The main contributions are as follows:

(1) Because the Seq2Seq model cannot explicitly model retrieval from external data, it struggles to generate information stored in an external KB. This thesis therefore proposes a task-oriented dialogue system model based on memory-to-sequence learning. The model introduces a memory network into the encoder-decoder architecture to store, query, and reason over the external KB. During decoding, the memory network and an LSTM jointly produce two probability distributions: one over the global vocabulary and one over the contents of the memory network. A gate unit then selects the more appropriate distribution to generate the output at each time step. Experiments show that the proposed model makes more effective use of the external KB, improves the entity accuracy of system responses, and thereby improves the task completion rate of the system.

(2) Because most current memory-based task-oriented dialogue systems store the dialogue context and the KB in the same memory, querying and reasoning over the KB in memory becomes difficult. This thesis proposes a sequence learning model based on multi-level memory and copy enhancement, which uses multi-level memory to store, query, and reason over the KB separately. The model abandons the usual triple representation of the KB and instead represents each KB record as a set of key-value pairs. In addition, a copy mechanism is introduced into the decoding process so that relevant entities can be copied directly from the dialogue history as output. Experiments show that the proposed sequence learning model effectively improves the entity accuracy of the generated dialogue; compared with the best-performing baseline model, it achieves a performance improvement of up to 12%.
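To make the gated joint decoding of contribution (1) concrete, the following is a minimal PyTorch-style sketch of a single decoding step. It is an illustration under stated assumptions, not the thesis's actual implementation: the module name, dimensions, and the use of dot-product attention over memory slots are all hypothetical choices.

```python
# Sketch of one decoding step that combines a vocabulary distribution with a
# distribution over memory-network contents, gated by a scalar gate unit.
# All names (MemoryAugmentedDecoderStep, hidden_dim, ...) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedDecoderStep(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTMCell(embed_dim, hidden_dim)
        self.vocab_proj = nn.Linear(hidden_dim, vocab_size)  # global vocabulary distribution
        self.gate = nn.Linear(hidden_dim, 1)                  # chooses vocab vs. memory

    def forward(self, prev_embed, state, memory_keys, memory_mask):
        # prev_embed:  (batch, embed_dim)           embedding of the previous output token
        # state:       tuple of (h, c), each (batch, hidden_dim)
        # memory_keys: (batch, n_slots, hidden_dim) encoded KB memory slots
        # memory_mask: (batch, n_slots)             1 for real slots, 0 for padding
        h, c = self.lstm(prev_embed, state)

        # Distribution over the global vocabulary.
        p_vocab = F.softmax(self.vocab_proj(h), dim=-1)

        # Attention over memory slots yields a distribution over KB contents.
        scores = torch.bmm(memory_keys, h.unsqueeze(-1)).squeeze(-1)
        scores = scores.masked_fill(memory_mask == 0, float('-inf'))
        p_memory = F.softmax(scores, dim=-1)

        # Gate value near 1 favors generating from the vocabulary,
        # near 0 favors pointing into the KB memory.
        g = torch.sigmoid(self.gate(h))
        return g, p_vocab, p_memory, (h, c)
```

At inference time, a hard selection (e.g. use `p_memory` when `g < 0.5`) matches the abstract's description of choosing the most appropriate distribution; a soft mixture `g * p_vocab` padded against `(1 - g) * p_memory` is a common alternative.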
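For contribution (2), the sketch below illustrates the key-value record representation and a pointer-style copy distribution over the dialogue history. The example KB fields and helper names are hypothetical and only meant to show the shape of the idea.

```python
# Illustrative sketch: key-value KB records and a copy distribution over the
# dialogue history. Field names and function names are hypothetical.
import torch
import torch.nn.functional as F

# Instead of (subject, relation, object) triples, each KB record is kept as a
# set of key-value pairs, so one memory slot can hold a complete record.
kb_records = [
    {"name": "Golden Wok", "cuisine": "Chinese", "address": "12 Main St"},
    {"name": "Pasta House", "cuisine": "Italian", "address": "5 River Rd"},
]

def copy_distribution(decoder_state, history_states, history_tokens, vocab_size):
    """Pointer-style copy distribution over tokens in the dialogue history.

    decoder_state:  (hidden,)            current decoder hidden state
    history_states: (len_hist, hidden)   encoder states of the dialogue history
    history_tokens: (len_hist,)          vocabulary ids (LongTensor) of history tokens
    """
    # Attention scores over the dialogue history.
    scores = history_states @ decoder_state
    attn = F.softmax(scores, dim=-1)

    # Scatter attention mass onto the vocabulary ids of the history tokens,
    # so entities already mentioned in the dialogue can be copied as output.
    p_copy = torch.zeros(vocab_size)
    p_copy.scatter_add_(0, history_tokens, attn)
    return p_copy
```

Keeping whole records as key-value pairs lets one attention step retrieve all attributes of an entity, which is the motivation the abstract gives for moving away from the triple representation.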
Keywords/Search Tags: Task-oriented dialogue system, KB, Seq2Seq model, Memory network, Multi-level memory