In the wave of artificial intelligence over the past decade, perceptual intelligence such as vision and hearing has made great progress under AI technology represented by deep learning, and the field is gradually advancing toward cognitive intelligence that empowers machines to reason and think. As an important form of knowledge representation in the era of big data, the knowledge graph is a cornerstone for realizing knowledge-centered cognitive intelligence. Entity semantic relation extraction is the basis for constructing and applying large-scale knowledge graphs: its task is to extract semantic relations between entities from unstructured text, providing data support for building knowledge-centered cognitive intelligence systems. Entity semantic relation extraction is also a classic task in information extraction and has attracted wide attention from both academia and industry. In recent years, supported by deep neural networks, relation extraction has achieved good results on multiple public datasets. However, there is a large gap between real-world scenarios and the idealized settings of traditional public datasets. Existing methods are suitable only for simple application scenarios that consider a single relation instance in isolation, involve a small number of entities, and can express a complete fact with a single sentence of text. When applied to practical scenarios with complex contexts and complex relations, they face severe challenges at three levels: modeling dependencies between entity relations, complex entity structures, and cross-sentence entity relations. This thesis conducts in-depth research on relation extraction in practical scenarios, analyzes the existing problems, and proposes improvements. The main research work is summarized as follows:

1. Most of the existing methods directly use
related entity pairs to classify relations, considering the relation between two entities in isolation, which makes it difficult to handle the semantic dependencies and constraints between relations in practical scenarios. To address this problem, this thesis proposes a relation extraction model that integrates a pre-trained language model with label-dependency knowledge. It uses a graph convolutional network to model the semantic dependencies between relation labels and, combined with the powerful feature-encoding ability of the pre-trained language model BERT, comprehensively considers all relational facts in a sentence, thereby improving extraction performance. Experimental results show that, compared with baseline methods, the model achieves a significant performance improvement on the sentence-level relation extraction task.

2. Existing methods focus on sentence-level relation extraction and are limited to entity relations in single-sentence text, which is inevitably restrictive in practice. In practical application scenarios, most entity semantic relations are described by whole documents or paragraphs and usually involve multiple entities and complex text structures; it is therefore necessary to advance relation extraction to the document level. To address this problem, this thesis proposes a document-level relation extraction model based on adaptive semantic-path awareness, which builds a fine-grained document graph and uses a graph neural network to model multi-granularity semantic information within a document. To better capture the effective information of entities on the document graph, the model controls the message-propagation algorithm in both breadth and depth, and filters and aggregates document-level information by learning adaptive perception paths for node message propagation. Experimental results show that the model clearly improves the extraction of both intra-sentence and cross-sentence entity relations.

3. Existing
document-level relation extraction models focus on obtaining document-level entity representations and then predict the relation of an entity pair from two static entity representations. However, the representation of the same entity in different entity pairs should be closely related to the entity pair it belongs to, and a dynamically generated, pair-specific representation can better express the semantic relation. To address this problem, this thesis proposes two new techniques centered on maintaining entity-pair representations: context-guided mention integration and inter-entity-pair reasoning, which use the information within and between entity pairs, respectively, to encode and update entity-pair representations. Context-guided mention integration leverages entity-pair-sensitive context to guide the target entity's integration of its internal coreferent mentions. Inter-entity-pair reasoning constructs a homogeneous entity-pair graph and uses a graph neural network to comprehensively consider the internal connections of all entity pairs in a document. Experimental results show that this method significantly improves the performance of document-level relation extraction.
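As a concrete illustration of the inter-entity-pair reasoning idea, the following minimal sketch (plain Python; all function names, the toy entities, and the stand-in pair vectors are hypothetical, not the thesis's actual model) connects entity pairs that share an entity and performs one averaging message-passing step, so that each entity-pair representation absorbs information from related pairs in the same document:

```python
def build_pair_graph(pairs):
    """Connect entity pairs that share at least one entity (an edge per shared entity)."""
    edges = {i: [] for i in range(len(pairs))}
    for i, (h1, t1) in enumerate(pairs):
        for j, (h2, t2) in enumerate(pairs):
            if i != j and {h1, t1} & {h2, t2}:
                edges[i].append(j)
    return edges

def propagate(reps, edges):
    """One message-passing step: mix each pair's vector with the mean of its neighbours'."""
    new_reps = []
    for i, rep in enumerate(reps):
        neigh = edges[i]
        if not neigh:
            new_reps.append(rep[:])
            continue
        # Mean-aggregate neighbour representations, dimension by dimension.
        agg = [sum(reps[j][k] for j in neigh) / len(neigh) for k in range(len(rep))]
        # Simple residual-style update: average the old vector with the aggregate.
        new_reps.append([(r + a) / 2 for r, a in zip(rep, agg)])
    return new_reps

# Toy document with three entities A, B, C and three candidate entity pairs.
pairs = [("A", "B"), ("B", "C"), ("A", "C")]
reps = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # stand-in entity-pair vectors
edges = build_pair_graph(pairs)
updated = propagate(reps, edges)
```

In the actual model a learned graph neural network layer would replace the fixed averaging update, but the sketch shows the key design choice: reasoning operates on entity-pair nodes rather than on static per-entity representations, so evidence for one relational fact can inform related facts.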