
Research On Text Inferencing Algorithms Based On Deep Learning

Posted on: 2021-04-26
Degree: Master
Type: Thesis
Country: China
Candidate: W Li
Full Text: PDF
GTID: 2428330623467763
Subject: Cyberspace security
Abstract/Summary:
In recent decades, with the rapid development of artificial-intelligence technology based on deep learning, the amount of data in text form has grown rapidly. Natural language processing, which concerns how computers understand and generate language, has become a research hotspot worldwide. For a long time, most approaches to natural language processing relied on shallow machine-learning models. However, with the popularity and success of word embeddings, sequence-to-sequence models, attention mechanisms, and pre-trained language models, neural-network-based models have achieved excellent results on a wide range of natural language processing tasks. Natural language inference requires solving problems such as semantic understanding, sentence representation, and associative reasoning, and is a fundamental task in natural language processing. At the current stage, the deep understanding of sentences in natural language inference still faces many challenging problems. Based on deep neural networks and the attention mechanism, this thesis explores and studies natural language inference methods. The main research work is as follows:

1. We introduce the characteristics of natural language meaning understanding and two common sentence representation methods. To address the problem that these two methods either cannot be computed in parallel or require many parameters, we propose a sentence-representation inference method based on the self-attention mechanism. The method consists of four parts: a word embedding layer, a sentence encoding layer, a semantic extraction layer, and an aggregation output layer. The embedding layer fuses word-level and character-level information to enrich the feature representation; the encoding layer applies the self-attention mechanism in parallel to extract the contextual semantics of the sentence; and the semantic extraction layer extracts specific information from different parts of the sentence to represent its semantics comprehensively. Experiments show that this method achieves 86.8% accuracy on the SNLI dataset, improving on BiLSTM and other baseline methods to some extent.

2. We introduce the basic structure of the interactive reasoning framework and then design an interactive reasoning network based on syntactic information. The network comprises a word embedding layer, an interactive matching layer, a syntactic-information extraction layer, a global-information fusion layer, and a prediction layer. The core interactive matching layer improves on the decomposable attention model, which does not consider word order or context, by using a bidirectional long short-term memory network for sequence encoding. Meanwhile, the syntactic-information extraction layer uses the Gumbel Tree-LSTM model to construct a syntactic structure tree and extract hierarchical information about words. Merging local inference information with syntactic structure information to build global feature information allows semantic reasoning relationships between sentences to be identified at a deeper level. In addition, the semantic roles of words are labeled to enrich the semantic features of the sentences. This method achieves 88.9% accuracy on the SNLI dataset and, compared with other methods, understands semantics at a deeper level.
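To illustrate the parallel encoding idea behind the first method, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. The toy dimensions and random embeddings are illustrative assumptions, not the thesis's actual architecture; the point is that every token attends to all other tokens simultaneously, with no sequential recurrence.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over a sentence matrix X (n_tokens, d).
    All pairwise similarities are computed in one matrix product, so the
    whole sentence is encoded in parallel (unlike an RNN)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # (n, n) pairwise similarities
    weights = softmax(scores, axis=-1)  # each row is a distribution over tokens
    return weights @ X                  # context-aware token representations

# Toy "sentence": 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
H = self_attention(X)
print(H.shape)  # (4, 8): same shape, each vector now mixes sentence context
```

In a real encoder the queries, keys, and values would come from separate learned projections; this sketch uses the raw embeddings for all three to keep the mechanism visible.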
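The aggregation and prediction stages described above can be sketched with the heuristic matching features widely used in NLI models (concatenation, absolute difference, element-wise product of the premise and hypothesis vectors). The vectors, classifier weights, and label order below are hypothetical placeholders, not the thesis's trained parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate(premise_vec, hypothesis_vec):
    """Fuse two sentence vectors into one feature vector using common
    NLI matching heuristics: [p; h; |p - h|; p * h]."""
    return np.concatenate([
        premise_vec,
        hypothesis_vec,
        np.abs(premise_vec - hypothesis_vec),  # captures divergence
        premise_vec * hypothesis_vec,          # captures agreement
    ])

rng = np.random.default_rng(1)
p, h = rng.normal(size=8), rng.normal(size=8)  # toy sentence vectors
features = aggregate(p, h)                     # shape (32,)

# Hypothetical linear classifier over the three NLI labels
W = rng.normal(size=(3, 32)) * 0.1
probs = softmax(W @ features)  # entailment / neutral / contradiction
print(features.shape, round(float(probs.sum()), 6))
```

The difference and product terms give the classifier direct evidence of where the two sentences agree or conflict, which a plain concatenation would force it to learn from scratch.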
Keywords/Search Tags:Natural language inference, Sentence representation, Syntactic information, Interactive reasoning, Self-attention mechanism