
Research On Natural Language Understanding In Task-Oriented Dialogue System

Posted on: 2021-03-11
Degree: Master
Type: Thesis
Country: China
Candidate: Z W Zhang
Full Text: PDF
GTID: 2428330629488909
Subject: Engineering
Abstract/Summary:
Human-computer dialogue technology is a hot topic in natural language processing. User intent classification and semantic slot filling are the core tasks of dialogue understanding and have attracted growing attention from both academia and industry. In recent years, neural networks and deep learning have been applied to natural language understanding tasks and achieved significant progress. However, several challenges remain: 1) the gradient problems of recurrent neural networks restrict model performance; 2) a single kind of context information cannot fully express the semantics of a sentence; 3) most approaches treat user intent classification and slot filling separately, ignoring the relations and differences between the two tasks. This thesis therefore focuses on natural language understanding in task-oriented dialogue systems and conducts research on semantic understanding, word-level attention mechanisms, semantic fusion, multi-task learning, and transfer learning. The main contributions are summarized as follows:

Firstly, to address the low contribution of domain-related words to sentence-level semantic encoding and the gradient problems of recurrent neural networks, we propose an approach based on the independent recurrent neural network (IndRNN) and a word-level attention mechanism. A multi-layer IndRNN extracts semantic representations of dialogue records, and the word-level attention mechanism computes contribution weights for domain-related words. Experimental results on SMP2017-ECDT, a Chinese multi-domain task-oriented dialogue dataset, show that our model achieves the best overall performance and significant improvements across the 31 categories.

Secondly, to address the shortcomings of current slot filling models, namely inefficient slot boundary recognition and inadequate modeling of context semantics, we propose a semantic slot filling method that fuses local semantics with global structural information. A global self-attention mechanism learns the global structural information, while a CNN extracts the local semantic information of the sentence. Experimental results on the task-oriented dialogue benchmark ATIS show that our model outperforms other slot filling models and achieves the best results.

Thirdly, to address the modeling of relations between intents and slots, as well as knowledge transfer, we propose a joint learning framework built on the bidirectional pre-trained language model BERT. We design a two-stage Encoder-Decoder learning framework and propose an intent-reward mechanism based on specific slots. The model establishes an explicit relationship between user intents and semantic slots and offers a new, interpretable perspective on dialogue understanding. Incorporating BERT gives the target task a better initialization and enriches its contextual language knowledge.
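The word-level attention idea in the first contribution can be illustrated with a minimal sketch: each word's hidden state (e.g., from an IndRNN encoder) is scored against a learned query vector, the scores are softmax-normalized into contribution weights, and the weighted sum becomes the sentence representation. The function name and the use of a single query vector here are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def word_attention_pooling(hidden_states, query):
    # hidden_states: (seq_len, dim) per-word encoder outputs
    # query: (dim,) learned query vector (assumed form of the scorer)
    scores = hidden_states @ query                   # relevance score per word
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over words
    return weights @ hidden_states                   # weighted sum -> (dim,)

# Toy usage: 4 words with 3-dim hidden states
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))
q = rng.standard_normal(3)
sentence_vec = word_attention_pooling(H, q)
print(sentence_vec.shape)  # (3,)
```

Words whose hidden states align with the query (here standing in for domain-related words) receive larger weights, so they contribute more to the pooled sentence vector than function words do.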
Keywords/Search Tags: Task-Oriented Dialogue System, Natural Language Understanding, Intent Classification, Slot Filling, Deep Learning