Named entity recognition is a fundamental task in natural language processing. Its goal is to locate every named entity in unstructured text and determine its category, such as person name, place name, or organization name. It plays an important role in downstream tasks such as entity relation extraction, question answering, and machine translation, and therefore has significant research value. However, because natural language expression is diverse, nesting frequently occurs in text, typically when a named entity of one category is embedded inside a named entity of another category. To address the insufficient capture of word-internal information and of contextual semantic information in existing nested named entity recognition methods, this dissertation proposes two models. The main research contents are as follows:

(1) A nested named entity recognition model based on interactive feature fusion. Because existing methods ignore the interdependencies among different features within a word and therefore capture too little information, the model first performs interactive learning between the word-level embedding vector and the character-level embedding vector to mine the internal and hidden semantic information of each word. A BiLSTM network then produces sentence-level vectors for the two representations, which again undergo interactive learning to alleviate the insufficient capture of word-internal information. The two sentence-level feature vectors are concatenated and fed into a multi-head attention mechanism to obtain global feature information. Finally, a fully connected layer identifies candidate spans, which are divided at fine granularity and assigned entity categories. Experiments on the GENIA dataset yield a recall (R) of 72.4% and an F1 score of 71.2%, verifying the effectiveness of the proposed model.

(2) A nested named entity recognition model based on two-level feature complementarity. To address the insufficient contextual semantic information captured when an encoder generates the text vector, a nested named entity recognition model based on two-level feature complementarity is proposed. In this model, the word-level and character-level embedding vectors are first fed into a BiLSTM network to obtain contextual feature information. The two sentence-level vectors then complement each other at the low level, each capturing useful semantic information from the other. A multi-head attention mechanism is next introduced to strengthen attention to important features, and the two sentence-level vectors are complemented again at the high level. The complemented sentence-level feature vectors are concatenated into the final sentence representation vector, mining richer textual semantic information and alleviating the insufficient capture of contextual information. Finally, the sentence representation vector is passed to the entity-word recognition and fine-grained division module, which locates nested named entities and identifies their types. The model based on two-level feature complementarity reaches a recall (R) of 77.2% and an F1 score of 72.7%, achieving good results.

Figures: 19; Tables: 9; References: 66.
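
The following is a minimal PyTorch sketch of the pipeline described in model (1): interactive learning between word-level and character-level representations, BiLSTM encoding, concatenation, multi-head attention for global features, and span-level classification. The class name, all layer sizes, the use of bidirectional cross-attention to realize the "interactive learning" step, and the simple span-pair scorer are illustrative assumptions, not the dissertation's exact design.

```python
# Illustrative sketch only; sizes and the fusion operation are assumptions.
import torch
import torch.nn as nn


class InteractiveFusionNER(nn.Module):
    def __init__(self, vocab_size=10000, char_feat_dim=50, emb_dim=100,
                 hidden_dim=128, num_heads=4, num_labels=5):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # assume a character encoder has already produced char_feat_dim features per token
        self.char_proj = nn.Linear(char_feat_dim, emb_dim)
        # "interactive learning" between word- and char-level vectors,
        # sketched here as bidirectional cross-attention
        self.w2c_attn = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)
        self.c2w_attn = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)
        # separate BiLSTMs produce two sentence-level feature sequences
        self.word_lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.char_lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        # concatenated features pass through multi-head self-attention for global context
        self.global_attn = nn.MultiheadAttention(4 * hidden_dim, num_heads, batch_first=True)
        # fully connected span scorer over (start, end) token-pair features
        self.span_scorer = nn.Linear(8 * hidden_dim, num_labels)

    def forward(self, word_ids, char_feats):
        w = self.word_emb(word_ids)           # (B, T, emb_dim)
        c = self.char_proj(char_feats)        # (B, T, emb_dim)
        # interactive learning: each representation attends to the other
        w_i, _ = self.w2c_attn(w, c, c)
        c_i, _ = self.c2w_attn(c, w, w)
        hw, _ = self.word_lstm(w + w_i)       # (B, T, 2*hidden_dim)
        hc, _ = self.char_lstm(c + c_i)       # (B, T, 2*hidden_dim)
        h = torch.cat([hw, hc], dim=-1)       # (B, T, 4*hidden_dim)
        g, _ = self.global_attn(h, h, h)      # global feature information
        # enumerate candidate spans and score each (start, end) pair
        B, T, D = g.shape
        starts = g.unsqueeze(2).expand(B, T, T, D)
        ends = g.unsqueeze(1).expand(B, T, T, D)
        return self.span_scorer(torch.cat([starts, ends], dim=-1))  # (B, T, T, num_labels)


if __name__ == "__main__":
    model = InteractiveFusionNER()
    word_ids = torch.randint(0, 10000, (2, 12))   # batch of 2 sentences, 12 tokens each
    char_feats = torch.randn(2, 12, 50)           # precomputed character-level features
    print(model(word_ids, char_feats).shape)      # torch.Size([2, 12, 12, 5])
```

Scoring every (start, end) pair is one straightforward way to handle nested entities, since overlapping spans receive independent labels; the dissertation's fine-grained division step may implement this differently.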