
Research on Key Technologies of Deep-Learning-Based Open-Domain Dialogue Systems

Posted on: 2022-11-09
Degree: Master
Type: Thesis
Country: China
Candidate: Z L Zhuan
Full Text: PDF
GTID: 2518306749972049
Subject: Automation Technology
Abstract/Summary:
Building human-like dialogue systems is among the most challenging tasks in natural language processing, and open-domain dialogue systems are central to this research. Two important open problems are how to enable an open-domain dialogue system to generate diverse responses and how to keep the system's persona consistent throughout a conversation. This thesis proposes targeted improvements for both problems.

(1) To address the response-diversity problem of open-domain dialogue systems, this thesis proposes CA-VAE, a two-stage dialogue generation model based on conditional adversarial learning in the latent space. The model first learns latent sentence representations with a variational autoencoder, and then introduces the latent variables of the dialogue context into adversarial learning in the latent space, so that the latent dialogue representations produced by the model's inference network are highly correlated with the dialogue history. Finally, these latent representations are passed through the generative network to produce responses. Experiments show that the model improves diversity by 7% over the baseline model and generates responses that are more contextually relevant, fluent, and varied.

(2) To address the persona-consistency problem of open-domain dialogue systems, this thesis proposes PT-CVAE, a Transformer-based persona-consistent dialogue generation model. The approach builds a conditional variational autoencoder on the Transformer architecture. A persona-information fusion attention module fuses the dialogue content representation with the persona representation to obtain a persona-biased dialogue representation, which the prior network then maps to a latent variable in the latent space. The latent variable then controls response generation through both direct fusion and pseudo-attention. Experiments show that the proposed model not only improves persona consistency by 6% but also achieves good fluency and relevance compared with responses generated by other baseline models.
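The following PyTorch-style sketch illustrates the general two-stage idea behind CA-VAE as summarized above: stage one learns latent sentence representations with a VAE, and stage two performs adversarial learning over those latents conditioned on the context latent. All class names, network sizes, and the toy training step are illustrative assumptions, not the thesis's actual implementation; the token-level decoder (the generative network) is omitted for brevity.

```python
# Illustrative sketch of latent-space conditional adversarial learning (not thesis code).
import torch
import torch.nn as nn

class SentenceVAE(nn.Module):
    """Stage 1: learn latent sentence representations (decoder back to tokens omitted)."""
    def __init__(self, vocab=8000, emb=128, hid=256, z_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.to_mu = nn.Linear(hid, z_dim)
        self.to_logvar = nn.Linear(hid, z_dim)

    def encode(self, tokens):
        _, h = self.encoder(self.embed(tokens))               # h: (1, B, hid)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return z, mu, logvar

class LatentGenerator(nn.Module):
    """Stage 2 inference network: maps a context latent (plus noise) to a response latent."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, z_dim))

    def forward(self, z_ctx):
        noise = torch.randn_like(z_ctx)                        # source of response diversity
        return self.net(torch.cat([z_ctx, noise], dim=-1))

class LatentDiscriminator(nn.Module):
    """Scores whether a (context latent, response latent) pair is real or generated."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, z_ctx, z_resp):
        return self.net(torch.cat([z_ctx, z_resp], dim=-1))   # real/fake logit

# Toy adversarial step: generated response latents should be indistinguishable,
# given the context latent, from the latents of real responses.
vae, G, D = SentenceVAE(), LatentGenerator(), LatentDiscriminator()
bce = nn.BCEWithLogitsLoss()
ctx_ids = torch.randint(0, 8000, (4, 20))
resp_ids = torch.randint(0, 8000, (4, 20))
z_ctx, _, _ = vae.encode(ctx_ids)
z_real, _, _ = vae.encode(resp_ids)
z_fake = G(z_ctx)
d_loss = bce(D(z_ctx, z_real), torch.ones(4, 1)) + bce(D(z_ctx, z_fake.detach()), torch.zeros(4, 1))
g_loss = bce(D(z_ctx, z_fake), torch.ones(4, 1))               # generator tries to fool D
```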
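Similarly, a minimal sketch of the PT-CVAE idea described above: a persona-information fusion attention module lets the dialogue representation attend over persona tokens, a prior network maps the fused representation to a Gaussian latent, and the latent is injected into the Transformer decoder. Class names and dimensions are assumptions, and direct fusion and pseudo-attention are collapsed here into a single extra memory token, which may differ from the thesis's exact formulation.

```python
# Illustrative sketch of a Transformer-based persona-conditioned CVAE (not thesis code).
import torch
import torch.nn as nn

class PersonaFusionCVAE(nn.Module):
    def __init__(self, vocab=8000, d=256, z_dim=64, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        enc_layer = nn.TransformerEncoderLayer(d_model=d, nhead=heads, batch_first=True)
        self.ctx_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.persona_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Persona-information fusion attention: dialogue tokens attend over persona tokens.
        self.fusion_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        # Prior network: maps the persona-biased dialogue representation to a Gaussian latent.
        self.prior = nn.Linear(d, 2 * z_dim)
        dec_layer = nn.TransformerDecoderLayer(d_model=d, nhead=heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.z_to_tok = nn.Linear(z_dim, d)   # inject z as an extra memory token
        self.out = nn.Linear(d, vocab)

    def forward(self, ctx_ids, persona_ids, resp_ids):
        ctx = self.ctx_encoder(self.embed(ctx_ids))
        per = self.persona_encoder(self.embed(persona_ids))
        # Fuse persona information into the dialogue content representation.
        fused, _ = self.fusion_attn(query=ctx, key=per, value=per)
        pooled = fused.mean(dim=1)                             # sentence-level summary
        mu, logvar = self.prior(pooled).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # The latent guides generation: the decoder attends to z alongside the fused
        # context (causal masking omitted for brevity in this sketch).
        memory = torch.cat([self.z_to_tok(z).unsqueeze(1), fused], dim=1)
        dec = self.decoder(self.embed(resp_ids), memory)
        return self.out(dec), mu, logvar

# Toy usage
model = PersonaFusionCVAE()
ctx = torch.randint(0, 8000, (2, 16))
persona = torch.randint(0, 8000, (2, 12))
resp = torch.randint(0, 8000, (2, 10))
logits, mu, logvar = model(ctx, persona, resp)                 # logits: (2, 10, 8000)
```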
Keywords/Search Tags: Dialogue system, Dialogue generation, Persona Consistency, Variational Autoencoder, Transformer