
Research On Multimodal Sentiment Classification Technology For Social Media

Posted on: 2022-01-29  Degree: Master  Type: Thesis
Country: China  Candidate: X. Y. Hu  Full Text: PDF
GTID: 2518306572450984  Subject: Software engineering
Abstract/Summary:
With the rise of social media and the development of the mobile Internet, more and more multimodal data appear on social media. These data contain rich sentiment information, and extracting and exploiting this information is of great significance for business decision-making, public opinion monitoring, and stock market prediction. Moreover, multimodal data connects natural language processing and computer vision, and its analysis remains a major challenge. Multimodal sentiment analysis therefore has both great application value and great scientific value, and more and more research organizations and companies have begun to pay attention to this field. Focusing on multimodal sentiment classification in social media, this thesis proposes three multimodal sentiment classification techniques for data of different granularities and from different perspectives.

The coarse-grained multimodal sentiment classification technique based on the attention mechanism targets the coarse-grained image-text multimodal sentiment classification task. This thesis proposes a multi-task learning model that combines a specially designed attention mechanism, a gating mechanism, and a multi-task learning mechanism. Comparative experiments on public datasets demonstrate that the attention-based multi-task learning model is effective for coarse-grained multimodal sentiment classification.

The fine-grained multimodal sentiment classification technique based on mixed attention targets the fine-grained image-text multimodal sentiment classification task. Unlike the coarse-grained task, it classifies the sentiment polarity of a fine-grained Aspect Term in the text. We propose two models. The first, a fine-grained multimodal sentiment classification model based on co-attention, introduces co-attention between image and text in the multi-head attention module, and at the same time explicitly introduces co-attention between text and the Aspect Term in the upper layer. Building on the first model, we propose a second, fine-grained multimodal sentiment classification model based on mixed attention, which integrates more attention information in the multi-head attention module, including self-attention over text and image, co-attention between image and text, and co-attention between image and the Aspect Term. The validity of our methods is demonstrated by comparative experiments on two public datasets.

The multimodal sentiment classification technique based on pre-training adopts a pre-training approach, which has achieved great success in the text domain. Building on this, we propose a multimodal sentiment classification model based on pre-training. The model introduces image information in the input layer and uses three pre-training tasks specially designed for sentiment classification to perform self-supervised training on a large-scale unlabeled multimodal corpus. This not only aligns image and text in the semantic space, but also enhances the model's ability to extract sentiment information from image and text. Comparative experiments on the dataset demonstrate the effectiveness of our model.
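The gating mechanism mentioned for the coarse-grained model can be illustrated with a minimal sketch. This is not the thesis's actual architecture; it assumes a common formulation in which a learned sigmoid gate mixes a text vector and an image vector per dimension (the names `gated_fusion`, `W_g`, and `b_g` are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(text_vec, image_vec, W_g, b_g):
    # Gate g in (0, 1) decides, per dimension, how much of the text
    # feature vs. the image feature enters the fused representation.
    g = sigmoid(W_g @ np.concatenate([text_vec, image_vec]) + b_g)
    return g * text_vec + (1.0 - g) * image_vec

rng = np.random.default_rng(1)
d = 8
t = rng.standard_normal(d)            # text feature vector
v = rng.standard_normal(d)            # image feature vector
W = rng.standard_normal((d, 2 * d)) * 0.1
b = np.zeros(d)
fused = gated_fusion(t, v, W, b)
print(fused.shape)                    # (8,)
```

Because the gate is a convex weight, each fused coordinate lies between the corresponding text and image coordinates, which is one reason gating is a popular way to suppress an uninformative modality.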
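The co-attention between image and text used in the fine-grained models can likewise be sketched in a few lines. This is a simplified single-head, weight-free version for illustration only, assuming the standard scaled dot-product form: text tokens act as queries and image regions as keys/values, so each token gathers an image-aware summary (the function name `co_attention` is hypothetical):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(text, image, d_k):
    # text:  (n_tokens, d)   -- queries
    # image: (n_regions, d)  -- keys and values
    scores = text @ image.T / np.sqrt(d_k)   # (n_tokens, n_regions)
    attn = softmax(scores, axis=-1)          # each row sums to 1
    return attn @ image                      # image-aware token features

rng = np.random.default_rng(0)
text = rng.standard_normal((5, 16))   # 5 text tokens, 16-dim
image = rng.standard_normal((9, 16))  # 9 image regions, 16-dim
out = co_attention(text, image, 16)
print(out.shape)                      # (5, 16)
```

In the mixed-attention variant described above, several such attention maps (self-attention, image-to-text, and image-to-Aspect-Term) would be computed in parallel inside the multi-head module and their outputs combined.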
Keywords/Search Tags:Sentiment analysis, multi-modal machine learning, multi-task learning, pre-training