
Research On Recognizing Textual Entailment Based On Dynamic Multiway Attention And Semantic Relations Of Words

Posted on: 2021-04-30
Degree: Master
Type: Thesis
Country: China
Candidate: X Q Wu
GTID: 2428330605982448
Subject: Computer Science and Technology

Abstract/Summary:
Recognizing textual entailment is a core and challenging task in natural language understanding, whose aim is to enable computers to deeply understand textual information. Because many natural language processing tasks must handle text that contains implied relations, textual entailment is widely used in question answering systems, relation extraction, machine translation, and other tasks. Early research mainly relied on statistical and rule-based methods, whose final results depend heavily on the quality of hand-crafted features, which in turn depends on human experience. With the development of deep learning, recognizing-textual-entailment models based on deep neural networks have alleviated these problems and achieved many breakthroughs. Based on an analysis of related work and the shortcomings of existing models, this paper proposes deep-learning-based natural language inference methods. The main contents are as follows:

Existing methods usually apply a single attentional encoding to a sentence and pass the learned sentence representation vector to the prediction layer, so they cannot obtain a sufficiently comprehensive sentence representation from only one attention mechanism. To address this problem, this paper proposes a natural language inference model based on multi-way dynamic mask attention. A multi-way attention encoder builds multiple models of each sentence, so that the model can better exploit information at different word levels. A dynamic mask selector adjusts the mask so that the attention mechanism can focus on important reverse-order information on top of temporal modeling, and reinforcement learning is used to solve the mask-selection problem of dynamic attention. Experimental results show that the proposed model significantly outperforms the baseline models on publicly available natural language inference datasets.

In many cases, datasets are small and of inconsistent quality, which makes it difficult for a model to obtain all the knowledge needed for natural language inference from the data alone. To address this problem, this paper proposes an attention convolutional neural network model that fuses lexical meanings: it embeds the semantic relations of words in WordNet into the GloVe word vectors, attends to features between sentences with an attention mechanism, extracts sentence features with attention convolution, and obtains the final result through an inference layer. Experimental results show that the model achieves an accuracy of 89.4% on the SNLI dataset; in particular, when the dataset is small, the model improves accuracy by 9.3% compared with the model based on adverse semantic relations.
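To make the first contribution concrete, the following is a minimal PyTorch sketch of multi-way masked attention with a learned selector over the attention "ways". It is not the thesis code: the mask set (full, forward, backward) and all layer names are illustrative assumptions, and the reinforcement-learning-based dynamic mask selection described in the abstract is replaced here by a simple softmax gate for readability.

```python
# Minimal sketch (not the thesis implementation): several self-attention
# "ways" use different masks over the same sentence, and a learned gate
# combines their outputs into one sentence encoding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiWayMaskedAttention(nn.Module):
    def __init__(self, dim, n_ways=3):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, n_ways)   # selector over the attention ways
        self.n_ways = n_ways

    def build_masks(self, seq_len, device):
        # Three hypothetical masks: unrestricted, forward-only, backward-only.
        full = torch.ones(seq_len, seq_len, device=device)
        forward = torch.tril(full)
        backward = torch.triu(full)
        return torch.stack([full, forward, backward])       # (W, L, L)

    def forward(self, x):
        # x: (batch, seq_len, dim)
        B, L, D = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)
        scores = torch.matmul(q, k.transpose(-2, -1)) / D ** 0.5   # (B, L, L)
        masks = self.build_masks(L, x.device)
        outputs = []
        for w in range(self.n_ways):
            masked = scores.masked_fill(masks[w] == 0, float("-inf"))
            outputs.append(torch.matmul(F.softmax(masked, dim=-1), v))
        ways = torch.stack(outputs, dim=1)                   # (B, W, L, D)
        # Sentence-level gate: weight the ways by a learned selector
        # (stands in for the RL-trained dynamic mask selector).
        gate = F.softmax(self.gate(x.mean(dim=1)), dim=-1)   # (B, W)
        return (gate[:, :, None, None] * ways).sum(dim=1)    # (B, L, D)
```

In the thesis, the selection among masks is treated as a discrete decision optimized with reinforcement learning; the softmax gate above only illustrates where that decision sits in the architecture.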
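For the second contribution, the sketch below shows one plausible way to derive word-pair relation features from WordNet with NLTK, which would then be concatenated onto pre-trained GloVe embeddings before the attention-convolution layers. The feature set (synonym, antonym, hypernym indicators) and the function name are assumptions for illustration, not the thesis' actual design.

```python
# Minimal sketch (not the thesis implementation): WordNet relation indicators
# for a premise/hypothesis word pair, to be appended to GloVe word vectors.
# Assumes nltk is installed and the WordNet corpus has been downloaded
# via nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def wordnet_relation_features(w1, w2):
    """Return [synonym, antonym, hypernym] indicator features for a word pair."""
    syns1, syns2 = wn.synsets(w1), wn.synsets(w2)
    synonym = float(bool(set(syns1) & set(syns2)))
    antonym = 0.0
    for s in syns1:
        for lemma in s.lemmas():
            if any(a.name() == w2 for a in lemma.antonyms()):
                antonym = 1.0
    hypernym = 0.0
    for s1 in syns1:
        # Walk the hypernym closure of w1 and check it against w2's synsets.
        if set(s1.closure(lambda s: s.hypernyms())) & set(syns2):
            hypernym = 1.0
    return [synonym, antonym, hypernym]

# Example: the antonym indicator is expected to fire for this pair; in the
# model, such features would be concatenated to each word's GloVe embedding.
print(wordnet_relation_features("good", "bad"))
```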
Keywords/Search Tags:Recognizing textual entailment, Convolutional neural network, Reinforcement learning, Attention mechanism, WordNet