
Research On Multimodal Aspect-level Sentiment Analysis Based On Image And Text Fusion

Posted on: 2024-07-05
Degree: Master
Type: Thesis
Country: China
Candidate: Y F Fan
Full Text: PDF
GTID: 2568307094479264
Subject: Electronic information

Abstract/Summary:
In recent years, with the continued popularity of smartphones and the rapid development of social media, the information users publish on the Internet has become increasingly diverse, and user data consisting primarily of images and text is growing rapidly. Sentiment analysis of such multimodal image-text data is of great significance for enterprises seeking to optimize their products and services, and for governments monitoring online public opinion. However, most previous work on sentiment analysis is text-oriented, and relatively little research has combined images and text. In the aspect-level multimodal sentiment analysis task, the data of different modalities are often correlated; how to capture these cross-modal correlations more effectively, and how to model the interaction between aspect features and modal features, are the key issues that need to be analyzed and studied. To address these problems, this dissertation builds an aspect-level multimodal sentiment analysis model using deep learning, so as to identify the correlations between multimodal data and aspects more effectively and improve the model's sentiment classification performance. The main research contents of this dissertation are as follows:

(1) To address the problems that aspect information is easily lost and that word pairs formed from the aspect and the textual context struggle to interact effectively, this dissertation proposes an aspect-level sentiment analysis model that combines a context-preserving transformation with an attention-over-attention network. First, the model uses a bidirectional long short-term memory network to capture the hidden semantics of the words in the context and the aspect; the resulting sentence and aspect features, rich in semantic information, are fed into the context-preserving transformation layer, so that the sentence can learn more abstract word-level features while preserving its semantic information. The sentence representation produced by the context-preserving transformation layer is then fed into the attention-over-attention network, which models aspect features and sentence features jointly and explicitly captures the interaction between aspects and sentences. Through this network, the model can jointly learn representations of aspects and sentences and automatically focus on the important parts of the sentence.

(2) To address the problem that features from different modalities are difficult to fuse effectively, this dissertation proposes two attention-based multimodal feature fusion methods: one based on a multi-hop attention mechanism and one based on an interactive attention mechanism. The former stacks multiple attention layers to learn both the cross-modal interactions between modalities and the self-influence within each single modality. The latter uses the mean of the other modality's features as the query vector and applies interactive attention to capture the key information of the current modality, so that each modality interactively generates its own representation for sentiment classification.

(3) Finally, the proposed model is evaluated on Multi-ZOL, a public aspect-level multimodal sentiment analysis dataset. The joint features learned by the attention-over-attention network and the text and image features obtained through the attention mechanisms are concatenated, passed through a fully connected layer, and fed to a Softmax layer for the final sentiment classification. Experimental results show that, compared with existing models, the proposed model performs well on aspect-level sentiment analysis tasks.
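The attention-over-attention interaction in (1) can be illustrated with a minimal NumPy sketch. This assumes the standard attention-over-attention formulation (column-wise and row-wise softmax over a word-pair interaction matrix); the function and variable names are illustrative, not the dissertation's exact implementation.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_over_attention(H_s, H_a):
    """Jointly attend over sentence hidden states H_s (n, d) and
    aspect hidden states H_a (m, d); return a (d,) sentence vector."""
    I = H_s @ H_a.T                  # (n, m) word-pair interaction matrix
    alpha = softmax(I, axis=0)       # column-wise: sentence-to-aspect attention
    beta = softmax(I, axis=1)        # row-wise: aspect-to-sentence attention
    beta_bar = beta.mean(axis=0)     # (m,) averaged aspect-level attention
    gamma = alpha @ beta_bar         # (n,) final attention over sentence words
    return H_s.T @ gamma             # (d,) attended sentence representation
```

Because each column of `alpha` and the averaged vector `beta_bar` are probability distributions, the final weights `gamma` also sum to one, so the output is a convex combination of the sentence's hidden states.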
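The multi-hop fusion method in (2) stacks attention layers so that each layer refines the query produced by the previous one. The sketch below is a generic memory-network-style interpretation of that idea, under the assumption that each hop adds the attended summary back into the query; the update rule and names are assumptions for illustration.

```python
import numpy as np

def multi_hop_attention(H, query, hops=3):
    """Refine a (d,) query over features H (n, d) through several
    attention hops; each hop folds the attended summary into the query."""
    for _ in range(hops):
        scores = H @ query                   # (n,) relevance scores
        e = np.exp(scores - scores.max())
        weights = e / e.sum()                # softmax attention weights
        query = query + H.T @ weights        # refine query with attended summary
    return query                             # (d,) multi-hop representation
```

Passing another modality's features as the initial query models the cross-modal "interactive influence"; passing the same modality's own pooled features models the single-modal "self-influence" described above.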
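The interactive attention fusion in (2), which uses the mean of the other modality's features as the query vector, can be sketched as follows. This is a minimal NumPy illustration of the description above; the helper name and dimensions are illustrative assumptions.

```python
import numpy as np

def interactive_attention(H_self, H_other):
    """Attend over one modality's features H_self (n, d) using the mean
    of the other modality's features H_other (m, d) as the query."""
    query = H_other.mean(axis=0)             # (d,) cross-modal query vector
    scores = H_self @ query                  # (n,) relevance of each position
    e = np.exp(scores - scores.max())
    weights = e / e.sum()                    # softmax attention weights
    return H_self.T @ weights                # (d,) modality representation
```

Calling the function in both directions, once with text as `H_self` and image as `H_other` and once with the roles swapped, lets each modality interactively generate its own representation for classification.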
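The final classification step in (3), concatenating the branch features, applying a fully connected layer, and taking a Softmax, can be sketched as below. The feature dimensions and the number of sentiment classes are illustrative assumptions, not values taken from the dissertation.

```python
import numpy as np

def classify(features, W, b):
    """Concatenate per-branch feature vectors, apply a fully connected
    layer (W, b), and return a Softmax distribution over classes."""
    z = np.concatenate(features)      # fused feature vector
    logits = W @ z + b                # fully connected layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                # class probabilities
```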
Keywords/Search Tags: aspect-level multimodal sentiment analysis, deep learning, attention mechanism, feature fusion