With the continuous development of Internet technology, natural language processing plays an increasingly important role in many fields, and named entity recognition (NER), as a foundational technology for numerous NLP tasks, remains a focus of research. Machine learning and deep learning models have achieved fairly mature results, but for certain NER tasks, such as nested entity recognition and Chinese NER, conventional models still perform unsatisfactorily. Unlike English, Chinese has no natural word delimiter, so the drop in labeling accuracy caused by Chinese word-segmentation errors has long been a research focus. Moreover, a conventional RNN-based model can only use the word information that ends at the current position together with the state carried forward from the previous step, which leads to the loss of subsequent context. Likewise, NER has traditionally been treated as a sequence-labeling problem, yet conventional sequence models cannot recognize nested structures in text. To address these issues, this paper proposes two NER methods based on graph neural network structures.

1. To address the loss of contextual information in conventional RNN models, this paper constructs a graph neural network that gathers contextual information through node updates, reducing entity-labeling errors caused by missing subsequent information, and applies relative position encoding to recover the direction and position information that the graph structure otherwise loses. It also obtains lexicon information through dependency analysis to handle long-distance dependencies, and by letting words interact under relative position encoding it can model long-range relationships and missing direction and position cues more effectively.

2. To solve the nested
structure problem of nested entities, this paper uses Bi-GCN iterations to propagate information between potential entity start and end nodes so as to learn inner entities. Concretely, two graphs are constructed: an entity graph that exploits outer-sentence information and an adjacency graph that captures local contextual information. Graph features are then extracted and aggregated, and after the Bi-GCN produces the graph feature representations, the final nested-entity scores are computed. Finally, the representations learned by the graph convolutional network are fed back to the Chinese NER module to obtain better predictions.

This paper experimentally verifies each module on both a Chinese dataset and a Chinese nested dataset. The results on three NER datasets show that, compared with other recent models, the proposed model achieves improvements on all three.
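To illustrate the node-update idea behind the first method, the following is a minimal NumPy sketch of one graph-convolution step, not the paper's actual architecture: the toy adjacency matrix, embedding sizes, and the extra long-distance edge are all hypothetical. Unlike a unidirectional RNN, each node averages information from neighbors on both sides at every update, so later context can reach earlier positions.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: H' = ReLU(D^-1 (A + I) H W).

    A + I adds self-loops; row-normalizing by the degree matrix D
    averages each node's state with its neighbors', so information
    flows in both directions along the edges at every update.
    """
    A_hat = A + np.eye(A.shape[0])                  # self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)  # degree normalization
    return np.maximum(D_inv * (A_hat @ H) @ W, 0.0)

# Toy sentence of 4 character nodes linked in a chain; a hypothetical
# lexicon/dependency edge (0 <-> 3) lets distant words interact directly.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (0, 3)]:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))   # initial node embeddings
W = rng.standard_normal((8, 8))   # layer weights

H1 = gcn_layer(H, A, W)
print(H1.shape)                   # one update keeps the node count: (4, 8)
```

Stacking several such layers widens each node's receptive field, which is how the model accumulates context beyond its immediate neighbors.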
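The relative position encoding mentioned in the first method can be sketched as a signed, clipped offset table between token pairs; the clipping distance here is an illustrative choice, not the paper's setting. The sign preserves direction (before/after) and the magnitude preserves distance, the two kinds of information a plain graph aggregation discards.

```python
import numpy as np

def relative_positions(n, max_dist=4):
    """rel[i, j] = clip(j - i, -max_dist, max_dist).

    Positive values point forward in the sentence, negative values
    backward; clipping bounds the vocabulary of position offsets so
    each offset can index a learned embedding or attention bias.
    """
    idx = np.arange(n)
    return np.clip(idx[None, :] - idx[:, None], -max_dist, max_dist)

rel = relative_positions(5)
print(rel[0, 3], rel[3, 0])  # 3 and -3: same distance, opposite direction
```

In a full model each offset would index a learned embedding that modulates how strongly two words interact during the graph update.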
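The second method's span scoring can be sketched as a bilinear scorer over start- and end-node representations; in the paper these representations would come from the Bi-GCN, while here random features and the scoring form stand in as simplified assumptions. Because every (start, end) pair receives its own score, spans nested inside other spans are recognized independently, which is exactly what a flat sequence labeler cannot do.

```python
import numpy as np

def score_spans(H_start, H_end, U):
    """scores[i, j] = H_start[i] @ U @ H_end[j] for all token pairs.

    Each candidate span (i, j) is scored from its boundary nodes, so
    overlapping and nested spans are scored independently.
    """
    return H_start @ U @ H_end.T

rng = np.random.default_rng(1)
n, d = 5, 8                        # 5 tokens, feature dimension 8
H = rng.standard_normal((n, d))    # stand-in for Bi-GCN node features
U = rng.standard_normal((d, d))    # bilinear scoring weights

scores = score_spans(H, H, U)      # (5, 5) matrix of span scores
# Keep only valid spans (end >= start) above a toy threshold of 0:
valid = np.triu(np.ones((n, n), dtype=bool))
spans = [(int(i), int(j)) for i, j in zip(*np.where(valid & (scores > 0.0)))]
print(scores.shape, spans)
```

A real system would replace the fixed threshold with a learned classifier and train the scores against gold entity spans.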