
Research On Sentiment Analysis Method Based On Multi-modal Feature Fusion

Posted on: 2023-02-19    Degree: Master    Type: Thesis
Country: China    Candidate: Y Xue    Full Text: PDF
GTID: 2558306905490994    Subject: Software engineering
Abstract/Summary:
In recent years, with the development and wide application of information technology, more and more users share their views and express their feelings on personal social media, not only in text but also in richer media forms such as pictures, audio, and video. Compared with sentiment analysis on single-modal data, sentiment analysis on multi-modal data raises new challenges and considerations. Multi-modal data consists of several single-modal streams together with the interactions between them, so it is crucial to weigh the importance of each modality's information and to analyze the relations between modalities. Against this background, the main open problems are, first, how to fully capture the correlations between different positions and different dimensions within the same modality so as to extract each single modality's features as completely as possible, and second, how to handle contradictory information between modalities while effectively fusing the features of every single modality.

To address these problems, this paper proposes a multi-modal feature extraction and fusion method, SA_LG (Self-Attention-based Bi-LSTM and GATE). First, to fully capture the correlations between different positions and dimensions within the same modality, a Bi-LSTM-based temporal feature extraction method is proposed: each modality's sequence is fed into a Bi-LSTM to obtain features that encode its internal context. Second, to exploit the associations between modalities and to handle contradictory cross-modal information, a self-attention-based method is proposed that fuses the features of a target modality with those of the supplementary modalities. Third, to control the output information of each modality and fuse the multi-modal features effectively, a gating mechanism is proposed that regulates the contribution of each target modality and completes the fusion of the single-modal features. Finally, the fused multi-modal features are passed to a classification module to make the sentiment decision.

Comparative experiments against currently popular multi-modal feature fusion models, together with ablation experiments on the modules of the proposed model, show that the proposed multi-modal feature fusion method improves accuracy, precision, F1, and related metrics to a certain extent, demonstrating the usability and effectiveness of the method.
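The abstract describes the pipeline but publishes no code, so the following is a minimal PyTorch sketch of the three stages as described: per-modality Bi-LSTM encoders, attention-based fusion of each target modality with the remaining (supplementary) modalities, and a gated combination before classification. All class names, dimensions, hyper-parameters, and the exact attention wiring here are assumptions for illustration, not the author's implementation.

```python
import torch
import torch.nn as nn

class SA_LG(nn.Module):
    """Hypothetical sketch of the SA_LG pipeline described above.

    Per-modality Bi-LSTM encoders -> attention fusion of each target
    modality with the supplementary modalities -> gated combination ->
    sentiment classifier. Every detail here is an assumption.
    """

    def __init__(self, dims, hidden=128, num_classes=3):
        super().__init__()
        # One Bi-LSTM per modality (e.g. text, audio, video) to capture
        # context between positions within that modality.
        self.encoders = nn.ModuleList(
            [nn.LSTM(d, hidden, batch_first=True, bidirectional=True)
             for d in dims]
        )
        # Attention lets a target modality attend to supplementary ones.
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                          batch_first=True)
        # Gate deciding how much attended supplementary information is
        # mixed into each target modality's own features.
        self.gate = nn.Linear(4 * hidden, 2 * hidden)
        self.classifier = nn.Linear(len(dims) * 2 * hidden, num_classes)

    def forward(self, xs):
        # xs: one tensor per modality, each of shape (batch, seq, dim).
        encoded = [enc(x)[0] for enc, x in zip(self.encoders, xs)]
        fused = []
        for i, target in enumerate(encoded):
            # Concatenate the other modalities along time as the supplement.
            supp = torch.cat([e for j, e in enumerate(encoded) if j != i],
                             dim=1)
            attended, _ = self.attn(target, supp, supp)
            # Sigmoid gate controls, per position, how much of the
            # attended signal replaces the target's own features.
            g = torch.sigmoid(
                self.gate(torch.cat([target, attended], dim=-1)))
            fused.append((g * attended + (1 - g) * target).mean(dim=1))
        # Concatenate the gated summaries of all modalities and classify.
        return self.classifier(torch.cat(fused, dim=-1))
```

For example, with three modalities of feature sizes 300, 74, and 35 (hypothetical values), `SA_LG(dims=[300, 74, 35])` accepts a list of three `(batch, seq, dim)` tensors and returns class logits for the sentiment decision.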
Keywords/Search Tags:Sentiment Analysis, Bi-LSTM, Self-attention Mechanism, Feature Fusion