
Causality Extraction Based On Capsule Network With Self-Attentive Encoder

Posted on: 2021-04-21  Degree: Master  Type: Thesis
Country: China  Candidate: P Liu  Full Text: PDF
GTID: 2428330629452721  Subject: Software engineering
Abstract/Summary:
Causal language captures an essential component of the semantics of a text, and automatic identification of causal information in sentences is an important task in natural language processing. Recent studies have shown that causal information extraction can facilitate a variety of machine learning problems, including semantic analysis and question answering. Traditional methods rely on pattern matching, rule constraints, statistical learning, and similar techniques, which depend heavily on domain knowledge and feature engineering. With the explosive growth of unstructured text on the web and the dramatic improvement in computer hardware, data-driven learning theory has developed rapidly, while traditional machine learning methods cannot fully exploit the knowledge latent in big data. Neural networks have therefore become popular: they provide a basic framework for building complex and accurate models, and researchers have developed variants such as convolutional neural networks, recurrent neural networks, and long short-term memory (LSTM) networks for different application scenarios. With the continuous development of deep neural network technology, more and more scholars use neural networks to build natural language models. At the same time, the representation of text in computers has matured: because one-hot encoding cannot endow a text representation with semantic information, researchers have developed a variety of word vectors to represent text, with significant success. In recent years, neural network models have been widely applied to language tasks, and with the introduction and development of transfer learning and pre-trained models, researchers increasingly focus on finding a good feature encoding scheme for text representation and on customizing the model to the specific task in order to obtain the best results.
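The contrast drawn above between one-hot encoding and learned word vectors can be made concrete with a minimal numpy sketch; the vocabulary, embedding dimension, and random table below are invented for illustration (in practice the table would be learned, e.g. with word2vec or GloVe):

```python
import numpy as np

# Hypothetical toy vocabulary, purely for illustration.
vocab = {"rain": 0, "flood": 1, "sun": 2}

def one_hot(idx, size):
    """One-hot vector: all distinct words are equidistant, so no
    semantic similarity can be expressed."""
    v = np.zeros(size)
    v[idx] = 1.0
    return v

# Dense embedding table (random here; learned in real systems), which
# CAN place related words close together in the vector space.
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), 4))

def embed(word):
    return embedding_table[vocab[word]]

# Every pair of distinct one-hot vectors has dot product exactly 0,
# while dense embeddings carry graded similarity.
assert float(one_hot(0, 3) @ one_hot(1, 3)) == 0.0
assert embed("rain").shape == (4,)
```

This is why the abstract notes that one-hot encoding "cannot make the text representation contain semantic information": similarity between one-hot vectors is constant by construction.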
Therefore, how to properly encode the text input, how to avoid the shortcomings of existing popular frameworks, and how to train the network effectively all affect the final performance of the model. We conduct in-depth research on these issues, reusing and analyzing cutting-edge techniques from the computer vision field. In this paper, we propose a neural causality extractor, named CISA (Causality Extraction based on Capsule Network with Self-Attentive Encoder), which detects whether a specified event pair contains causal information and identifies which event is the cause and which is the effect. To better encode textual information, CISA uses a text feature encoder based on the self-attention mechanism, thereby avoiding the limitations of convolutional and recurrent neural networks. In addition, to improve the accuracy of the model, a capsule network is introduced to learn more instantiated features within a sentence. We evaluate the model on a public dataset, and the experimental results show that our method identifies causality in text with high accuracy. We also set up controlled experiments to evaluate each part of the model in detail.
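The two components named in the abstract, a self-attentive encoder and a capsule network with dynamic routing, can be sketched generically in numpy. This is not CISA's actual implementation (the abstract gives no architectural details); it follows the standard scaled dot-product self-attention formulation and the routing-by-agreement scheme of Sabour et al. (2017), with all dimensions and weights invented for the example:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise token affinities
    return softmax(scores, axis=-1) @ V       # context-mixed representations

def squash(s, axis=-1):
    """Capsule non-linearity: preserves direction, bounds length in [0, 1)."""
    norm2 = (s ** 2).sum(axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + 1e-9)

def dynamic_routing(u_hat, n_iter=3):
    """Routing-by-agreement. u_hat: (n_in, n_out, d_out) prediction vectors."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))               # routing logits
    for _ in range(n_iter):
        c = softmax(b, axis=1)                # coupling coefficients per input capsule
        s = (c[..., None] * u_hat).sum(0)     # weighted sum -> (n_out, d_out)
        v = squash(s)
        b = b + (u_hat * v[None]).sum(-1)     # reinforce agreeing predictions
    return v

rng = np.random.default_rng(0)
n, d = 6, 8                                   # toy sentence: 6 tokens, 8-dim embeddings
X = rng.normal(size=(n, d))
H = self_attention(X, *(rng.normal(size=(d, d)) for _ in range(3)))
u_hat = rng.normal(size=(n, 2, 4))            # 2 output capsules (e.g. cause / effect)
v = dynamic_routing(u_hat)
# Output capsule lengths lie in [0, 1) and can act as class-presence scores.
assert H.shape == (n, d) and v.shape == (2, 4)
assert np.all(np.linalg.norm(v, axis=-1) < 1.0)
```

Because self-attention relates every token pair directly, the encoder has no fixed receptive field (unlike CNNs) and no sequential bottleneck (unlike RNNs), which is the limitation the abstract says CISA avoids.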
Keywords/Search Tags: causality, self-attention, capsule network, word position embedding, dynamic routing