
Research on Pre-trained Language Models and Knowledge Enhancement for Aspect-Based Sentiment Analysis

Posted on: 2022-08-04
Degree: Master
Type: Thesis
Country: China
Candidate: X M Pan
Full Text: PDF
GTID: 2518306569481284
Subject: Computer technology
Abstract/Summary:
The goal of aspect-based sentiment analysis (ABSA) is to predict the sentiment polarity of a specific aspect within a target text (for example, given the review "The food was great but the service was slow" and the aspect "service", the model should predict negative). Current ABSA research mainly uses recurrent neural networks to represent the aspect term and its context, or fine-tunes pre-trained language models on the downstream task.

However, current research has the following shortcomings. First, some studies based on pre-trained language models focus only on learning a semantic representation of the target text, ignoring the relationships among the target text, the aspect term, and the sentiment polarity. Second, most knowledge-enhancement studies based on the pre-trained language model BERT directly apply in-domain datasets to BERT's two pre-training tasks, without considering the demands that different downstream tasks place on the pre-training objective functions and without analyzing how the weighting of those objectives affects downstream performance. Finally, the supervised training sets of some downstream tasks are too small, which easily leads to inadequate model training.

To address these problems, this thesis studies ABSA based on pre-trained language models and knowledge-enhancement methods. To tackle the neglected relationships among target text, aspect term, and sentiment polarity, as well as the shortage of supervised training data, three ways of constructing auxiliary sentences are proposed: information about the aspect term and sentiment polarity is used to transform the multi-class sentiment classification of the target text into a binary classification over sequence pairs (a sketch of one such construction is given below). On the one hand, this helps the pre-trained language model learn sentiment feature representations tied to specific aspect terms; on the other hand, it effectively increases the number of training samples. To address the neglected influence of the pre-training objective functions on downstream tasks, two knowledge-enhancement approaches are proposed that re-pretrain BERT on in-domain datasets and on task-related datasets, respectively; the influence of the pre-training objectives on downstream tasks is then investigated by varying the ratio between their weights (see the second sketch below).

The experimental results demonstrate the effectiveness of both the auxiliary-sentence constructions and the knowledge-enhancement approaches proposed in this thesis. On the Laptop and Restaurant datasets of SemEval-2014 Task 4, the three pre-trained language models BERT, XLNet, and RoBERTa all achieve good results when combined with the auxiliary sentences, showing that the constructions generalize across models. On Subtask 2 (aspect term polarity), RoBERTa reaches accuracies of 89.32% and 91.46% on the two datasets, respectively; on Subtask 4 (aspect category polarity), it reaches accuracies of 94.31% and 94.65% on the Restaurant four-way and three-way settings, respectively, establishing new state-of-the-art results on both datasets. The knowledge-enhancement experiments confirm the effectiveness of the two enhancement methods and also show that different downstream tasks favor different pre-training objective functions, so setting an appropriate ratio of pre-training objectives during knowledge enhancement contributes to improved model performance.
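As an illustration of the auxiliary-sentence idea, the following is a minimal sketch of one possible construction scheme. The abstract does not give the exact templates of the three proposed constructions, so the QA-style template, the candidate polarity set, and the function name below are assumptions for illustration only.

```python
# A minimal sketch of one possible auxiliary-sentence construction.
# The thesis proposes three construction methods; their exact templates
# are not given in the abstract, so this QA-style template and the
# candidate polarity set are assumptions.

POLARITIES = ["positive", "negative", "neutral"]

def build_sequence_pairs(text, aspect, gold_polarity):
    """Expand one (text, aspect, polarity) example into one sequence pair
    per candidate polarity. A pair is labeled 1 if the auxiliary sentence
    states the gold polarity and 0 otherwise, turning 3-way classification
    into binary sentence-pair classification and tripling the sample count."""
    pairs = []
    for polarity in POLARITIES:
        auxiliary = f"the polarity of the aspect {aspect} is {polarity}"
        label = 1 if polarity == gold_polarity else 0
        pairs.append((text, auxiliary, label))
    return pairs

# One labeled review becomes three binary training samples.
for text, aux, label in build_sequence_pairs(
        "The food was great but the service was slow.", "service", "negative"):
    print(label, "|", aux)
```

Each resulting (text, auxiliary) pair can then be fed to BERT, XLNet, or RoBERTa as a standard sentence-pair input for binary classification.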
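The weighted re-pretraining described above can be sketched as follows, assuming the Hugging Face transformers implementation of BERT's two pre-training heads. The weight values, the batch field names, and the checkpoint are illustrative assumptions, not the thesis's actual configuration.

```python
import torch.nn.functional as F
from transformers import BertForPreTraining

# Hypothetical objective ratio; the thesis searches over such ratios,
# but the values it uses are not given in the abstract.
W_MLM, W_NSP = 0.7, 0.3

model = BertForPreTraining.from_pretrained("bert-base-uncased")

def weighted_pretraining_loss(batch):
    """One re-pretraining step: compute BERT's two objectives separately
    and combine them with task-dependent weights instead of an equal sum.
    The batch is assumed to carry standard BERT pre-training fields;
    the key names here are illustrative."""
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                token_type_ids=batch["token_type_ids"])
    # Masked-language-model loss; label -100 marks tokens that were not masked.
    mlm_loss = F.cross_entropy(
        out.prediction_logits.view(-1, model.config.vocab_size),
        batch["mlm_labels"].view(-1), ignore_index=-100)
    # Next-sentence-prediction loss over the two-way [CLS] logits.
    nsp_loss = F.cross_entropy(
        out.seq_relationship_logits.view(-1, 2),
        batch["nsp_labels"].view(-1))
    return W_MLM * mlm_loss + W_NSP * nsp_loss
```

Varying W_MLM and W_NSP per downstream task is the mechanism by which the thesis studies how the pre-training objective ratio affects downstream performance.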
Keywords/Search Tags:Deep Learning, Natural Language Processing, Aspect-based Sentiment Analysis, Pre-trained Language Models, Knowledge Enhancement