
Research On Dialogue Generation Method Applied In Customer Service

Posted on: 2020-02-17
Degree: Master
Type: Thesis
Country: China
Candidate: B Y Wu
Full Text: PDF
GTID: 2428330623963578
Subject: Control engineering
Abstract/Summary:
The Turing test is the jewel in the crown of artificial intelligence, and dialogue is one way of implementing it. This thesis studies dialogue generation methods and applies the resulting models to customer service. First, a dialogue database containing more than 3.25 million dialogues is established. A dialogue generation model is then built with supervised learning. The model consists of an encoder and a decoder: the encoder encodes the input sequence into an intermediate semantic vector, and the decoder decodes that vector into an output sequence. An attention mechanism and a copy mechanism are combined and added to the model, and together they effectively improve the generation quality of the Seq2Seq model.

Second, since Chinese can be represented by Pinyin, a Chinese dialogue model that uses Pinyin to reduce the input dimension is proposed. The model takes Pinyin as input and splits each syllable into three parts, initial, final and tone, thereby reducing the input dimension. The Pinyin information is then embedded into an image-like representation, and Pinyin features at the single-character level and the context level are extracted by a fully convolutional network and a bidirectional LSTM network, respectively. Using Pinyin in this way reduces the space and time complexity of the dialogue model, which is very helpful for building a large Chinese dialogue model.

Finally, a dialogue generation model that applies a generative adversarial network to reinforcement learning is proposed. Dialogue generation is cast as a reinforcement learning problem in which two systems are trained jointly: a generator that produces dialogue and a discriminator that computes the reward of the generated dialogue. The rewards from the discriminator are judged on complete sentences and are assigned to every single word using Monte Carlo search. A GAN has a limitation with discrete data: because the generator's output is discrete, the gradient from the discriminator cannot be passed back to the generator. To solve this problem, a policy gradient method is applied so that the reward signal can be back-propagated. A decreasing teacher forcing rate is also proposed to alleviate the exposure bias problem, which further improves the quality of dialogue generation.

To evaluate dialogue quality, the models are tested on the established dialogue database with BLEU, ROUGE and discriminator scores as evaluation indicators. The experimental results show that the dialogue generation models proposed in this thesis can effectively generate human-like answers to questions.
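As an illustration of the encoder-decoder structure with attention described above, the following is a minimal PyTorch sketch. It is not the thesis's implementation: the GRU choice, the dot-product attention, the layer sizes and the class names (Encoder, AttnDecoder) are assumptions for demonstration, and the copy mechanism is omitted.

```python
# Minimal encoder-decoder with dot-product attention.
# All hyperparameters and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                                     # src: (batch, src_len)
        outputs, hidden = self.rnn(self.emb(src))               # outputs: (batch, src_len, hid)
        return outputs, hidden                                  # hidden is the "semantic vector"

class AttnDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prev_token, hidden, enc_outputs):
        # prev_token: (batch, 1); hidden: (1, batch, hid); enc_outputs: (batch, src_len, hid)
        query = hidden[-1].unsqueeze(1)                         # (batch, 1, hid)
        scores = torch.bmm(query, enc_outputs.transpose(1, 2))  # (batch, 1, src_len)
        weights = F.softmax(scores, dim=-1)                     # attention over source positions
        context = torch.bmm(weights, enc_outputs)               # (batch, 1, hid)
        rnn_in = torch.cat([self.emb(prev_token), context], dim=-1)
        output, hidden = self.rnn(rnn_in, hidden)
        return self.out(output.squeeze(1)), hidden              # logits over the vocabulary
```

During supervised training the decoder is fed the ground-truth previous token (teacher forcing); at inference it is called one step at a time and fed its own previous prediction.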
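The Pinyin dimension-reduction idea can be illustrated with a small self-contained sketch that splits a toneful Pinyin syllable into initial, final and tone, so that three small vocabularies (roughly 20-odd initials, about 35 finals and 5 tones) replace a character vocabulary of several thousand entries. The splitting rule and the initials list below are simplified assumptions, not the thesis's actual preprocessing.

```python
# Split a toneful Pinyin syllable such as "zhong1" into (initial, final, tone).
# The initials list and the splitting rule are simplified illustrative assumptions.
INITIALS = [
    "zh", "ch", "sh",  # two-letter initials must be checked first
    "b", "p", "m", "f", "d", "t", "n", "l", "g", "k", "h",
    "j", "q", "x", "r", "z", "c", "s", "y", "w",
]

def split_syllable(syllable: str):
    tone = syllable[-1] if syllable[-1].isdigit() else "5"    # "5" = neutral tone
    body = syllable[:-1] if syllable[-1].isdigit() else syllable
    for ini in INITIALS:
        if body.startswith(ini):
            return ini, body[len(ini):], tone
    return "", body, tone                                     # zero-initial syllable, e.g. "an4"

if __name__ == "__main__":
    for s in ["zhong1", "guo2", "ni3", "hao3", "an4"]:
        print(s, "->", split_syllable(s))
    # Each syllable is now described by three small categorical features
    # instead of one entry in a character vocabulary of several thousand items.
```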
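For the adversarial part, the sketch below shows how a REINFORCE-style policy gradient can pass the discriminator's reward back to a generator that outputs discrete tokens. All modules, names and sizes are toy assumptions; a single sentence-level reward copied to every step stands in for the Monte Carlo search described above, and the decreasing teacher forcing schedule is not shown.

```python
# Toy SeqGAN-style policy-gradient step: the generator samples a discrete
# sentence, the discriminator scores it, and REINFORCE pushes the generator
# toward high-reward tokens. All modules, names and sizes are illustrative.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRUCell(emb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, token, hidden=None):              # token: (batch,)
        hidden = self.rnn(self.emb(token), hidden)
        return self.out(hidden), hidden                  # logits: (batch, vocab)

class ToyDiscriminator(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.score = nn.Linear(emb_dim, 1)

    def forward(self, sentence):                         # sentence: (batch, seq_len)
        return torch.sigmoid(self.score(self.emb(sentence).mean(dim=1))).squeeze(1)

def policy_gradient_step(generator, discriminator, optimizer, start_token, max_len=20):
    tokens, log_probs, hidden = [start_token], [], None
    for _ in range(max_len):
        logits, hidden = generator(tokens[-1], hidden)
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()                           # sampling is discrete: no gradient flows here
        log_probs.append(dist.log_prob(action))
        tokens.append(action)

    sentence = torch.stack(tokens[1:], dim=1)            # (batch, max_len)
    with torch.no_grad():
        reward = discriminator(sentence)                 # (batch,) "how human does it look"

    # REINFORCE: minimizing -log_prob * reward maximizes the expected reward,
    # letting the discriminator's signal reach the generator despite the discrete output.
    loss = -(torch.stack(log_probs, dim=1) * reward.unsqueeze(1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    gen, disc = ToyGenerator(), ToyDiscriminator()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    start = torch.zeros(4, dtype=torch.long)             # batch of 4 sequences, start token id 0
    print("policy-gradient loss:", policy_gradient_step(gen, disc, opt, start))
```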
Keywords/Search Tags: dialogue generation, Pinyin feature, reinforcement learning, Generative Adversarial Network, decreasing teacher forcing rate