
Research On Aspect-level Sentiment Analysis And Stance Detection In Social Media

Posted on: 2023-03-12  Degree: Doctor  Type: Dissertation
Country: China  Candidate: B Liang  Full Text: PDF
GTID: 1528307376985109  Subject: Computer application technology

Abstract/Summary:
With the rapid development of the mobile Internet and social media, people express a large number of views, opinions, experiences, sentiments, and stances on social media platforms every day, producing large-scale social media sentiment data. Analyzing and understanding the sentiment and stance expressed in these data has therefore become an important topic in affective computing. Recently, research interest has been shifting from coarse-grained text-level or sentence-level sentiment analysis toward finer-grained aspect-level sentiment analysis and more macroscopic stance detection. Current fine-grained sentiment analysis research for social media focuses mainly on attention-based aspect-level sentiment extraction and remains inadequate at combining external knowledge to model the complex sentiment dependencies between aspects and their context. Likewise, most existing aspect-level multimodal sentiment analysis work addresses only the fusion of specific entities with multimodal features, and cannot exploit external sentiment knowledge to extract the important aspect-level sentiment features in each modality. In textual stance detection, most existing studies learn a target's stance features from the context through feature engineering or attention mechanisms, without distinguishing the different roles that words play in expressing stance toward different targets. Finally, current multimodal stance detection research mostly models each modality independently and lacks the ability to fuse information across modalities with respect to a specific target. This thesis therefore proposes a comprehensive framework spanning aspect-level textual sentiment analysis, aspect-level multimodal sentiment analysis, textual stance detection, and multimodal stance detection. To address these four problems, the thesis makes the following contributions.

For aspect-level sentiment analysis: existing methods for aspect term sentiment analysis struggle to simultaneously learn the contextual sentiment dependencies of the words within a specific aspect term and the sentiment dependencies between different aspects in the context. This thesis proposes an interactive graph convolutional network model for aspect term sentiment analysis. Based on the dependency tree of each sentence and external sentiment knowledge, the model constructs an aspect-specific sentiment dependency graph and an inter-aspect sentiment dependency graph, capturing the sentiment dependencies between context words and the words of the aspect term as well as the sentiment relations between the given aspect and the other aspects. An interactive graph convolution operation then models the contextual sentiment information of the specific aspect term and the sentiment relationships among different aspect terms simultaneously, yielding better aspect-level sentiment representations. Building on this, because existing aspect category sentiment analysis methods have difficulty modeling the complex sentiment dependencies between contexts and the relatively abstract aspect categories, the thesis derives aspect-aware words for each category with a topic model, uses them to construct the aspect-specific and inter-aspect sentiment dependency graphs of the categories, and proposes an aspect-aware interactive graph convolutional network. Grounded in the aspect-aware words in the context, this method models the contextual sentiment dependencies of a specific aspect category and the sentiment relationships between different aspect categories in the same sentence simultaneously through interactive graph convolution, thereby learning the sentiment information of the aspect categories in context. Experimental results on four benchmark datasets show that the proposed models significantly outperform existing models on aspect term sentiment analysis and aspect category sentiment analysis, respectively.

For aspect-level multimodal sentiment analysis: existing methods struggle to model the complex sentiment relationships among modalities with respect to a specific aspect. This thesis proposes an aspect-specific multimodal graph model that fuses the features of different modalities around the given aspect through graph construction. Built on pre-trained models for each modality, the method learns the complex aspect-level sentiment relationships between modalities via graph convolution guided by the attention information of the specific aspect. Building on this, the thesis further introduces external knowledge for the text modality and object detection for the image modality, and proposes a knowledge-fused multimodal graph model for aspect-level multimodal sentiment analysis, which effectively fuses the important sentiment information of the text modality with the important visual regions of the image modality for a specific aspect, improving the learning of aspect-level multimodal sentiment information. Experimental results on two aspect-level multimodal sentiment analysis datasets show that the aspect-specific multimodal graph model outperforms existing methods, and that the knowledge-fused multimodal graph model further improves performance.

For textual stance detection: to address the problem that existing target-specific stance detection methods have difficulty learning the different stance expression roles a word plays with respect to a specific target, this thesis proposes a target-adaptive graph convolutional network model for target-specific stance detection. The method computes target-adaptive stance expression weights for the words and constructs a target-adaptive stance expression relationship graph for each sentence, so as to effectively exploit the stance information of the specific target in the context. Building on this, because existing zero-shot stance detection methods have difficulty learning the relationships and differences among stance features during training, the thesis proposes a target-adaptive graph contrastive learning model for the zero-shot stance detection task. A novel hierarchical contrastive learning strategy integrates or separates feature information at the stance level and the target level in the latent representation space, so that the stance features of related known targets can be transferred to unknown ones, improving stance detection for unseen targets. Experimental results on two public benchmark datasets show that the target-adaptive graph convolutional network model outperforms the comparison models on target-specific stance detection, and that the target-adaptive graph contrastive learning model achieves state-of-the-art performance in the zero-shot setting.

For multimodal stance detection: given the lack of public datasets and the difficulty of fusing stance features across modalities with respect to a specific target, this thesis designs and constructs two new multimodal stance detection datasets. It then revisits the multimodal stance detection problem and decomposes it into two sub-problems: target-specific multimodal stance detection and zero-shot multimodal stance detection. For the former, the thesis proposes a pretext task-based target-oriented multimodal graph convolutional network model; for the latter, a pretext task-based target-oriented multimodal graph contrastive learning model. The graph convolutional model captures the textual description embedded in the image via optical character recognition (OCR), and the pretext task learns the correlations among the text modality, the image modality, and the OCR results. Based on these correlations, the method injects the specific target information into the construction of the multimodal graph, so that the graph convolution operation can effectively learn the multimodal stance features of the specific target. For zero-shot multimodal stance detection, the pretext task additionally learns the stance expression type of each instance, and these types drive a hierarchical contrastive learning strategy: the graph contrastive learning model separates multimodal stance features of different stance expression types and stance polarities in the latent space, so that the stance features of related known targets can be transferred to unseen ones. Experimental results on the two new datasets show that, compared with textual stance detection and zero-shot textual stance detection, the target-specific and zero-shot multimodal stance detection tasks are more challenging; the proposed target-oriented multimodal graph convolutional network model outperforms the baselines on target-specific multimodal stance detection, and the target-oriented multimodal graph contrastive learning model achieves state-of-the-art performance on zero-shot multimodal stance detection.
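The abstract does not give the models' equations, but the recurring building block across all four contributions is graph convolution over a sentence graph, with the representation of the aspect (or target) tokens pooled for classification. The sketch below is a minimal, hypothetical numpy illustration of that building block, assuming a standard symmetrically normalized graph convolution; the toy sentence, dependency edges, and dimensions are illustrative and are not taken from the thesis.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization D^{-1/2}(A + I)D^{-1/2} used by standard GCNs."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(H, A_norm, W):
    """One graph convolution: aggregate neighbour features, project, ReLU."""
    return np.maximum(0.0, A_norm @ H @ W)

# Toy sentence: "the food was great"; aspect term = "food" (token index 1).
# Hypothetical undirected dependency edges: the-food, food-great, was-great.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 3), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                      # toy token embeddings
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

A_norm = normalize_adjacency(A)
H = gcn_layer(gcn_layer(H, A_norm, W1), A_norm, W2)

# Pool the aspect-term token representations for the sentiment classifier.
aspect_repr = H[[1]].mean(axis=0)
```

In the thesis's models the adjacency would come from the sentiment dependency graphs (aspect-specific and inter-aspect) rather than a plain dependency tree, and two such graphs interact; this sketch shows only the single-graph convolution step.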
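Both zero-shot models rely on a hierarchical contrastive learning strategy that integrates or separates features at two levels. The abstract gives no formulation, so the following is a minimal numpy sketch under the assumption of a SupCon-style supervised contrastive loss applied once with stance labels and once with target (or stance-expression-type) labels, combined by a hypothetical weight `alpha`; all names and values are illustrative.

```python
import numpy as np

def supcon_loss(Z, labels, temperature=0.1):
    """Supervised contrastive loss: pull same-label embeddings together in
    the latent space and push different-label embeddings apart."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = Z @ Z.T / temperature
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    sim = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = sim - np.log((np.exp(sim) * not_self).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & not_self
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor[pos.sum(axis=1) > 0].mean()       # skip anchors w/o positives

def hierarchical_contrastive_loss(Z, stance_labels, target_labels, alpha=0.5):
    """Two-level combination: contrast at the stance level and the target level."""
    return (alpha * supcon_loss(Z, stance_labels)
            + (1 - alpha) * supcon_loss(Z, target_labels))
```

As a sanity check, embeddings that cluster by label yield a lower loss than the same embeddings with mismatched labels, which is the behaviour the strategy exploits to transfer known-target stance features to unseen targets.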
Keywords/Search Tags: Aspect-Level Sentiment Analysis, Aspect-Level Multimodal Sentiment Analysis, Stance Detection, Multimodal Stance Detection