The rapid development of artificial intelligence has brought not only technological innovation but also exposure to vast amounts of information every day. How to analyze and exploit these data and feed relevant information back to users has become a major problem. A large part of these data is unstructured text. Natural language processing, which specializes in textual information, can draw on machine learning and neural network techniques to understand and analyze text and extract valuable information from it. After nearly twenty years of rapid development, natural language processing research has matured considerably; within the field, information extraction occupies a fundamental and critical position and has long been a research hotspot.

Against this background, this paper studies the relation extraction task. Building on the rise of deep learning, it adopts pre-trained language models, a focus of research over the past two years, and fine-tunes them on relation extraction datasets. Borrowing the framework of contrastive learning, it generates contrastive samples from supervised and few-shot relation extraction datasets respectively and trains the model with a cross-entropy loss. The paper proposes a novel method that fuses context and entity information, integrating the surface and semantic features learned by the model to improve relation extraction performance. In addition, it proposes a strategy for exploiting the weight parameters of the pre-trained model, which makes better use of the model's high-level semantic information and further strengthens the relation extraction model.

Extensive ablation experiments were conducted on the proposed method, with a traditional CNN model as the baseline. The results show that fully learning the context of relation statements together with entity position and entity type features improves the relation extraction model: the F1 score increases by 7.87% on the supervised task and by 7.52% on the few-shot task. The importance of each layer of the pre-trained model was also analyzed by taking the output of each hidden layer of BERT in turn as the relation representation. The results show that the high-level semantic features of the pre-trained model have a significant impact on relation extraction, and that the adaptive use of the pre-trained model's weight parameters proposed in this paper further improves relation extraction performance.
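The abstract describes training with contrastive samples under a cross-entropy loss. The following is a minimal sketch of one common way to realize that combination (an in-batch InfoNCE-style objective), not the thesis's actual code; the function name, the `temperature` value, and the in-batch negative sampling are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchors, positives, temperature=0.1):
    """anchors, positives: (batch, dim) relation embeddings.

    positives[i] is the contrastive sample matching anchors[i]; all other
    rows in the batch serve as negatives (an assumption of this sketch).
    """
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    # Cosine-similarity logits between every anchor and every candidate.
    logits = anchors @ positives.t() / temperature
    # The correct candidate for anchor i is candidate i (the diagonal),
    # so the contrastive objective reduces to a cross-entropy loss.
    labels = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, labels)
```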
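The abstract also mentions fusing context and entity information and adaptively using the pre-trained model's layer weights. The sketch below shows one plausible reading of those two ideas with a Hugging Face BERT encoder: entity spans are pooled and concatenated with the context vector, and all hidden layers are combined with learned softmax weights. The class and argument names (`RelationClassifier`, `head_mask`, `tail_mask`) are hypothetical, and the exact fusion and weighting schemes in the thesis may differ.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class RelationClassifier(nn.Module):
    def __init__(self, num_relations, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name, output_hidden_states=True)
        hidden = self.bert.config.hidden_size
        num_layers = self.bert.config.num_hidden_layers + 1  # embeddings + 12 layers
        # One learnable scalar weight per layer; softmax keeps them normalized.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        # Context ([CLS]) + head entity + tail entity -> relation logits.
        self.classifier = nn.Linear(3 * hidden, num_relations)

    def forward(self, input_ids, attention_mask, head_mask, tail_mask):
        # head_mask / tail_mask: (batch, seq_len) floats marking entity tokens.
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Adaptive layer combination: softmax-weighted sum over all hidden layers.
        stacked = torch.stack(out.hidden_states, dim=0)          # (L, B, T, H)
        weights = torch.softmax(self.layer_weights, dim=0).view(-1, 1, 1, 1)
        seq = (weights * stacked).sum(dim=0)                     # (B, T, H)
        cls = seq[:, 0]  # context representation
        # Mean-pool each entity span, guarding against empty masks.
        head = (seq * head_mask.unsqueeze(-1)).sum(1) / head_mask.sum(1, keepdim=True).clamp(min=1)
        tail = (seq * tail_mask.unsqueeze(-1)).sum(1) / tail_mask.sum(1, keepdim=True).clamp(min=1)
        # Fuse context and entity information for classification.
        return self.classifier(torch.cat([cls, head, tail], dim=-1))
```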