
Research On Diversity Enhancement Model Of Dialogue System Based On Deep Learning

Posted on: 2021-04-22    Degree: Master    Type: Thesis
Country: China    Candidate: H Yuan    Full Text: PDF
GTID: 2428330632453268    Subject: Industrial engineering
Abstract/Summary:
With the rapid development of deep learning, an increasing number of scenarios combine deep learning with natural language processing, and dialogue systems have become a research hotspot. Widely used in areas such as question answering, customer service, and smart devices, dialogue systems offer new ways to interact and reduce the cost of service delivery for businesses. Compared with retrieval-based dialogue, generative dialogue has stronger cross-domain and generalization capabilities and is better suited to open-domain scenarios, so this thesis focuses on generative dialogue tasks in open domains. Seq2Seq models are currently the common choice for building such dialogue systems, but they fail to learn long-term dependencies and frequently repeat safe, generic responses, resulting in low diversity in the generated responses. There is therefore an urgent need to increase the diversity of generated responses, which this thesis addresses as follows.

This thesis proposes a Transformer dialogue model based on a context-separation mechanism and a CVAE to improve the diversity of responses generated by the dialogue system. First, because existing models cannot explicitly distinguish contextual utterances from the source utterance, the proposed model uses the context-separation mechanism to fully exploit the different degrees of influence that contextual utterances and the source utterance have on response generation. Second, the model uses a CVAE structure to introduce latent variables, through which it captures the characteristics of the latent semantic distribution. To address the drawback that existing RNN- or LSTM-based CVAE structures cannot capture longer-range dependencies, the model fuses the CVAE structure with a Transformer to introduce the latent variables; this not only captures long-term dependencies but also yields a richer latent semantic distribution, thereby increasing the diversity of responses generated by the dialogue model.

Experimental results on a multi-turn dialogue dataset, compared against existing models, indicate that the improved mechanism and model proposed in this thesis can effectively enhance the diversity of responses generated by the dialogue system.
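The CVAE component described above, with a Transformer encoder parameterizing a Gaussian latent variable via the reparameterization trick, can be illustrated with a minimal PyTorch sketch. This is not the thesis's actual architecture: the class name, layer sizes, mean pooling over encoder states, and all hyperparameters here are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class TransformerCVAESketch(nn.Module):
    """Illustrative sketch (hypothetical, not the thesis model):
    a Transformer encoder whose pooled output parameterizes a
    Gaussian latent variable z, CVAE-style."""

    def __init__(self, vocab_size=1000, d_model=64, latent_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Two heads produce the mean and log-variance of q(z | x)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens))   # (batch, seq, d_model)
        pooled = h.mean(dim=1)                 # simple mean pooling (assumption)
        mu = self.to_mu(pooled)
        logvar = self.to_logvar(pooled)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
        # so sampling stays differentiable w.r.t. mu and logvar
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

model = TransformerCVAESketch()
tokens = torch.randint(0, 1000, (2, 10))  # batch of 2 dummy token sequences
z, mu, logvar = model(tokens)
print(tuple(z.shape))  # (2, 16)
```

At generation time, sampling different values of z yields different latent semantics for the same input, which is the mechanism the thesis relies on to diversify responses; a decoder (omitted here) would condition on z to produce the reply.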
Keywords/Search Tags: generative dialogue, dialogue diversity, deep learning, CVAE