
Natural Language Inference Based On Seq-Tree Encoder With Syntax Information

Posted on: 2019-01-10
Degree: Master
Type: Thesis
Country: China
Candidate: S Y Liu
Full Text: PDF
GTID: 2348330542493499
Subject: Control theory and control engineering
Abstract/Summary:
With the rapid development of the Internet, large volumes of text data are generated around the world every day, in many different forms, which makes it difficult for computers to process such natural language data. Natural language inference, the subject of this paper, is a core task in the field of natural language processing and underlies algorithms for machine translation, machine reading, and question answering. For the past few decades, approaches based on hand-crafted features dominated this field, but with advances in computing power and artificial intelligence, deep learning has been applied to all areas of natural language processing, including natural language inference. Deep learning has greatly improved the state of natural language inference and has indirectly promoted the development of other areas of natural language processing. This paper presents a natural language inference method based on a sequence-tree encoding model with syntactic information. First, the paper introduces a distributed part-of-speech (POS) embedding to represent the POS information in the text, which allows words with multiple parts of speech to be expressed more effectively; this POS embedding is concatenated with the word embedding. Second, the paper uses a Bi-LSTM network to encode the texts and, at the same time, a Tree-LSTM network to encode the dependency trees of the texts, so that the encoded sentence vectors carry syntactic information. Through the combination of the Bi-LSTM and Tree-LSTM networks, the text is encoded into vectors that contain both syntactic and POS information. Finally, through the Sentence Fusion model presented in this paper, the premise and hypothesis are merged into a single feature vector to complete the classification task. The model is trained on the Stanford Natural Language Inference (SNLI) dataset and achieves good performance on the test set.
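To make the described pipeline concrete, the following is a minimal PyTorch sketch of the idea: word and POS embeddings are concatenated and fed to a Bi-LSTM, a Child-Sum Tree-LSTM (in the style of Tai et al., 2015) is then run bottom-up over the dependency tree of the Bi-LSTM outputs, and a simple fusion head combines the premise and hypothesis vectors. All module names, dimensions, and the [p; h; p-h; p*h] fusion heuristic are illustrative assumptions, not details taken from the thesis itself.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """Child-Sum Tree-LSTM cell: composes a node's state from its own
    input vector and the states of its children (assumed variant)."""
    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.W_iou = nn.Linear(in_dim, 3 * mem_dim)           # input/output/update gates
        self.U_iou = nn.Linear(mem_dim, 3 * mem_dim, bias=False)
        self.W_f = nn.Linear(in_dim, mem_dim)                 # one forget gate per child
        self.U_f = nn.Linear(mem_dim, mem_dim, bias=False)

    def forward(self, x, child_h, child_c):
        # x: (in_dim,); child_h, child_c: (num_children, mem_dim)
        h_tilde = child_h.sum(dim=0)                          # summed child hidden states
        i, o, u = (self.W_iou(x) + self.U_iou(h_tilde)).chunk(3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.W_f(x) + self.U_f(child_h))   # broadcasts over children
        c = i * u + (f * child_c).sum(dim=0)
        return o * torch.tanh(c), c

class SeqTreeEncoder(nn.Module):
    """Word + POS embeddings -> Bi-LSTM over the token sequence, then a
    Tree-LSTM over the dependency tree of the Bi-LSTM outputs."""
    def __init__(self, vocab_size, pos_size, word_dim=300, pos_dim=50, hidden=150):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim)        # distributed POS vectors
        self.bilstm = nn.LSTM(word_dim + pos_dim, hidden,
                              bidirectional=True, batch_first=True)
        self.tree = ChildSumTreeLSTMCell(2 * hidden, 2 * hidden)

    def forward(self, words, pos, children):
        # words, pos: (1, seq_len) id tensors; children[i] lists the
        # dependents of token i, with tokens ordered leaves-first (root last).
        x = torch.cat([self.word_emb(words), self.pos_emb(pos)], dim=-1)
        seq, _ = self.bilstm(x)                               # (1, seq_len, 2*hidden)
        seq = seq.squeeze(0)
        h = [None] * seq.size(0)
        c = [None] * seq.size(0)
        for i in range(seq.size(0)):                          # bottom-up traversal
            if children[i]:
                ch_h = torch.stack([h[k] for k in children[i]])
                ch_c = torch.stack([c[k] for k in children[i]])
            else:                                             # leaf: zero child state
                ch_h = ch_c = seq.new_zeros(1, seq.size(1))
            h[i], c[i] = self.tree(seq[i], ch_h, ch_c)
        return h[-1]                                          # root state = sentence vector

class FusionClassifier(nn.Module):
    """Hypothetical fusion head: concatenation, difference, and element-wise
    product of the premise and hypothesis vectors, then a small MLP."""
    def __init__(self, dim, num_classes=3):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, num_classes))

    def forward(self, p, h):
        return self.mlp(torch.cat([p, h, p - h, p * h], dim=-1))
```

In use, the premise and hypothesis would each be encoded with SeqTreeEncoder and the two root vectors passed to FusionClassifier, which predicts entailment, contradiction, or neutral as in SNLI.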
Keywords/Search Tags: Natural Language Inference, Long Short-Term Memory, Bidirectional Recurrent Neural Network, Dependency Parsing, Part of Speech