
Multi-Task Based Position-aware Study Of Emotion-cause Pair Extraction

Posted on: 2022-11-17  Degree: Master  Type: Thesis
Country: China  Candidate: C H Hu  Full Text: PDF
GTID: 2518306746481354  Subject: Computer Science and Technology
Abstract/Summary:
Emotion-Cause Pair Extraction (ECPE) is of practical significance for emotion monitoring: in the face of an emergency, knowing people's emotions and the reasons those emotions arise provides a basis for understanding the direction of public opinion. Likewise, filmmakers who know moviegoers' emotions and their causes can create movies that audiences like better. The ECPE task must not only model the interaction between emotion and cause when extracting emotion and cause clauses, but also address the impact of class imbalance in the data set on extraction accuracy. To tackle these problems, this study designs ECPE-P (Emotion-Cause Pair Extraction-Position-aware), an end-to-end emotion-cause pair extraction model with position-aware information. The end-to-end extraction scheme avoids the error propagation of the two-step method, in which the accuracy of the first extraction step limits the accuracy of the second pairing step. To reduce the impact of data imbalance, this study adopts a cost-sensitive cross-entropy loss function. To model the interaction between emotion and cause clauses, the model applies a hierarchical attention network to capture dependencies within the input sequence, and embeds the position of the emotion clause into cause extraction to strengthen the interaction between the emotion and cause clauses. Since the data set consists of Chinese text, pre-training is based on the Chinese BERT-wwm model released by HIT and iFLYTEK. Experiments demonstrate the feasibility and effectiveness of the proposed method.
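The cost-sensitive cross-entropy mentioned above can be sketched as a class-weighted loss: each example's loss is scaled by a weight tied to its true class, so the rare positive classes (emotion or cause clauses) contribute more to training than the majority neutral class. This is a minimal illustration, not the thesis's implementation; the weight values below are hypothetical, since the abstract does not publish them.

```python
import math

def weighted_cross_entropy(probs, label, class_weights):
    """Cost-sensitive cross-entropy for one example.

    probs         -- predicted probability for each class (sums to 1)
    label         -- index of the true class
    class_weights -- per-class cost weights (hypothetical values here);
                     setting all weights to 1 recovers plain cross-entropy
    """
    return -class_weights[label] * math.log(probs[label])

# Toy clause classifier output over {neutral, emotion-clause}:
probs = [0.9, 0.1]      # model is confident the clause is neutral
uniform = [1.0, 1.0]    # plain cross-entropy weights
weighted = [0.2, 1.8]   # up-weight the rare positive class (assumed values)

# Misclassifying a true emotion clause (label 1) is penalised more
# heavily under the cost-sensitive loss than under the uniform one.
plain = weighted_cross_entropy(probs, 1, uniform)
costly = weighted_cross_entropy(probs, 1, weighted)
```

In practice the same effect is obtained by passing a weight vector to a framework loss such as PyTorch's `nn.CrossEntropyLoss(weight=...)`.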
Keywords/Search Tags:ECPE, Position-Aware, Multi-task learning, BERT