
Relation Extraction Model Based On Attention Mechanism And Graph Neural Network

Posted on: 2022-12-26
Degree: Master
Type: Thesis
Country: China
Candidate: S Z Chen
Full Text: PDF
GTID: 2518306776492794
Subject: Automation Technology

Abstract/Summary:
The relation extraction task aims to extract triples from unstructured text. It is an important subtask of information extraction and foundational work for building knowledge graphs, question-answering systems, and other natural language processing applications, which gives it substantial research value. Relation extraction can be divided into sentence-level and document-level relation extraction according to the input text. In the sentence-level task, the goal is to identify entities and classify the relations between them. In the document-level task, the input text is more complex: the model is not expected to recognize entities, and its goal is to classify the relations between given entities. Information redundancy, the overlapping-triple problem, and the prior inference problem all limit the performance of sentence-level relation extraction models. In addition, sentence-level models ignore relations between entities in different sentences, so a sentence-level model cannot simply be applied to a document composed of multiple sentences. To address these problems, this thesis proposes sentence-level joint relation extraction models based on the attention mechanism and on contrastive triples, as well as a document-level relation extraction model based on graph convolutional networks. The main work is as follows:

1. A joint extraction model based on the attention mechanism is proposed to solve the information redundancy and overlapping problems. To address information redundancy, the sentence-level task is decomposed into two subtasks: (1) identifying head entities; (2) identifying tail entities and relation labels. To address the overlapping problem, the model uses the attention mechanism to predict the number of triples, so that an entity can participate in multiple predictions. This solves the overlapping problem and avoids the performance limit imposed by manual threshold screening of relations. The model achieves F1 scores of 90.8% and 91.9% on the public NYT and WebNLG datasets, and obtains better results in experiments with overlapping triples, demonstrating its effectiveness.

2. A relation extraction model based on contrastive triples is proposed to solve the prior inference problem. The model uses the attention matrix to generate relation embeddings; each relation embedding is combined with the entity embeddings and a full-text feature embedding into a context-based triple representation, so that the model can verify the correctness of a triple against its context. The model achieves better results than the benchmark models on the NYT and WebNLG datasets.

3. Building on the sentence-level research, a document-level relation extraction model based on graph convolutional networks is proposed to handle the otherwise-ignored case of triples whose entities appear in different sentences. The model combines graph and sequence features to classify relations: a graph convolutional network extracts graph-based entity features and path dependencies between entities, while a contrastive learning module extracts sequence-based context information. Entity structure information is also introduced to enrich the entity features and improve the accuracy of relation classification. The model achieves an Ign F1 score of 59.47% on the DocRED dataset, surpassing the benchmark models and proving its effectiveness on document-level extraction tasks.
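The attention-based triple-slot idea in contribution 1 can be illustrated with a minimal numpy sketch. This is an illustration only, not the thesis's actual architecture: the name `slot_attention`, the slot count, and the dimensions are assumptions. Each learned query ("slot") attends over the sentence via scaled dot-product attention, so the same token, and hence the same entity, can contribute to several predicted triples.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(slots, tokens):
    """Scaled dot-product attention: each 'triple slot' query attends
    over the token embeddings and pools a slot-specific representation.
    slots:  (num_slots, d) learned queries, one per candidate triple
    tokens: (seq_len, d)   encoder outputs for the sentence
    returns (num_slots, d) one pooled vector per candidate triple
    """
    d = tokens.shape[-1]
    scores = slots @ tokens.T / np.sqrt(d)   # (num_slots, seq_len)
    weights = softmax(scores, axis=-1)       # rows sum to 1
    return weights @ tokens

rng = np.random.default_rng(0)
tokens = rng.normal(size=(12, 16))   # 12 tokens, hidden size 16
slots = rng.normal(size=(4, 16))     # up to 4 candidate triples
pooled = slot_attention(slots, tokens)
print(pooled.shape)  # (4, 16)
```

Because every slot attends over all tokens independently, an entity mention is free to appear in more than one slot's triple, which is the property used to handle overlapping triples.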
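Contribution 2's idea of verifying a triple against context can be shown with a generic contrastive scoring sketch. The translational composition `head + rel` and the InfoNCE-style loss below are illustrative assumptions; the thesis builds its triple representation from attention-derived relation embeddings and a full-text feature embedding instead.

```python
import numpy as np

def triple_scores(head, rel, tails):
    """Cosine similarity between a (head, relation) query and candidate
    tail embeddings; a contrastive objective pushes the true tail's
    score above corrupted ones."""
    query = head + rel                       # illustrative composition
    q = query / np.linalg.norm(query)
    t = tails / np.linalg.norm(tails, axis=1, keepdims=True)
    return t @ q

def info_nce(scores, pos_idx, temperature=0.1):
    """InfoNCE loss: -log softmax(scores / T)[pos_idx]."""
    logits = scores / temperature
    logits = logits - logits.max()           # numerical stability
    return -(logits[pos_idx] - np.log(np.exp(logits).sum()))

head = np.array([1.0, 0.0, 0.0])
rel = np.array([0.0, 1.0, 0.0])
tails = np.array([[1.0, 1.0, 0.0],   # aligned with head + rel
                  [0.0, 0.0, 1.0]])  # corrupted tail
s = triple_scores(head, rel, tails)
print(s[0] > s[1])  # True
```

Training with such a loss teaches the model to rank the contextually correct triple above corrupted alternatives, which is the mechanism that lets it reject triples that look plausible a priori but are not supported by the text.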
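The graph-convolutional component in contribution 3 follows the standard GCN propagation rule, sketched here over a toy document graph. The toy adjacency, feature sizes, and the choice of ReLU are assumptions for illustration; the thesis's actual graph construction and layer details are not reproduced.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    A: (n, n)       adjacency over entity/mention nodes in the document graph
    H: (n, d_in)    node features
    W: (d_in, d_out) layer weights
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# toy document graph: 3 entity nodes, edges linking co-occurring mentions
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
H = rng.normal(size=(3, 8))
W = rng.normal(size=(8, 4))
H2 = gcn_layer(A, H, W)
print(H2.shape)  # (3, 4)
```

Stacking such layers lets information flow along paths between entity nodes, which is how features of entities mentioned in different sentences can be combined for cross-sentence relation classification.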
Keywords/Search Tags:Relation Extraction Task, Joint Extraction Model, Attention Mechanism, Graph Neural Network, Contrastive Learning