
Multi-modal Data Sentiment Analysis System Based On Combined LSTM

Posted on: 2022-12-15
Degree: Master
Type: Thesis
Country: China
Candidate: Y X He
Full Text: PDF
GTID: 2518306782452394
Subject: Automation Technology
Abstract/Summary:
Powered by advances in information and communication technology, sentiment analysis has become a popular research field in information science. To date, research on sentiment analysis has matured for single-modality data, but as data forms multiply, how to fuse emotional information across multiple modalities has become a problem that existing research must address. Existing multi-modal sentiment analysis studies ignore the interdependence between the contextual semantics of the utterance segments in a video, which limits the accuracy of analysis. At the same time, during multi-modal data fusion, the relevant algorithms overlook an important issue: the modalities differ in how much they contribute to sentiment classification and prediction. Based on the problems above, this thesis studies multi-modal sentiment analysis methods. The main research contents are as follows:

(1) This thesis proposes a multi-modal data sentiment analysis method based on combined LSTM (MDSA-CL). By introducing a bi-directional long short-term memory (BiLSTM) network, it solves the problem of contextual semantics being ignored during the extraction and fusion of multi-modal data. Specifically, first, an appropriate method is selected to process the unimodal features, eliminating redundant information and obtaining accurate unimodal inputs. Second, each unimodal feature is used for context-dependent information interaction within its modality. Third, a bimodal information interaction layer is constructed so that the three modalities interact dynamically in pairs, and a trimodal information interaction layer is constructed to capture the dynamic interaction among all three modalities. Finally, all of the above features are fused to obtain the sentiment classification prediction. The experimental results show that MDSA-CL outperforms all baseline methods.

(2) This thesis proposes a sentiment analysis method based on combined LSTM with an attention mechanism (MDSA-CLA), which reduces the generation of, and interference from, redundant information in unimportant modalities. Specifically, first, the unimodal features are extracted separately. Second, a unimodal information extraction layer is built from combined LSTMs to obtain the intra-modal interaction information. Third, the three modalities are grouped in pairs, and the dynamic intra- and inter-modal interaction information is processed by the attention mechanism. Fourth, the attention mechanism allocates an attention weight to each modality so that the features can be fused effectively. Finally, a fully connected layer and a softmax layer produce the final sentiment classification prediction. The experimental results show that MDSA-CLA performs better than all baseline methods.

(3) Based on the MDSA-CLA method, this thesis designs and implements a multi-modal sentiment analysis system. Experiments verify that the system's functions are greatly improved compared with a unimodal system, and demonstrate the actual effect of each interaction.
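The core idea of contribution (1), letting each utterance's features absorb context from both directions of the sequence via a BiLSTM, can be sketched as follows. This is a minimal numpy illustration with random weights and toy dimensions; the function names, shapes, and single-layer structure are assumptions for illustration, not the thesis's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(X, W, U, b, reverse=False):
    """Run a single-layer LSTM over X (T, d_in); return hidden states (T, d_h)."""
    T, _ = X.shape
    d_h = U.shape[0]
    h, c = np.zeros(d_h), np.zeros(d_h)
    H = np.zeros((T, d_h))
    order = range(T - 1, -1, -1) if reverse else range(T)
    for t in order:
        z = X[t] @ W + h @ U + b            # all four gate pre-activations at once
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)          # cell state update
        h = o * np.tanh(c)                  # hidden state (context-aware feature)
        H[t] = h
    return H

def bilstm(X, params_fwd, params_bwd):
    """Concatenate forward and backward passes: each utterance's feature
    now carries context from both earlier and later utterances."""
    Hf = lstm_pass(X, *params_fwd)
    Hb = lstm_pass(X, *params_bwd, reverse=True)
    return np.concatenate([Hf, Hb], axis=1)  # (T, 2 * d_h)

rng = np.random.default_rng(0)
d_in, d_h, T = 8, 4, 5                       # toy feature size, hidden size, utterances
def init():
    return (rng.normal(size=(d_in, 4 * d_h)) * 0.1,
            rng.normal(size=(d_h, 4 * d_h)) * 0.1,
            np.zeros(4 * d_h))

X = rng.normal(size=(T, d_in))               # one modality's utterance-level features
H = bilstm(X, init(), init())
print(H.shape)                               # (5, 8)
```

In the full method, one such layer would be applied per modality before the bimodal and trimodal interaction layers fuse their outputs.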
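The attention-based modality weighting in contribution (2) can be sketched as follows: each modality's representation is scored, the scores are normalized with softmax into attention weights, and the fused representation is the weighted sum. This is a minimal numpy sketch with a random scoring vector standing in for learned parameters; all names and shapes are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())                # subtract max for numerical stability
    return e / e.sum()

def attention_fuse(modal_feats, w_att):
    """Score each modality, normalize the scores into attention weights,
    and return the attention-weighted fusion of the modal features."""
    M = np.stack(modal_feats)              # (n_modalities, d)
    scores = M @ w_att                     # one relevance score per modality
    alpha = softmax(scores)                # attention weights, sum to 1
    fused = alpha @ M                      # (d,) weighted combination
    return fused, alpha

rng = np.random.default_rng(1)
d = 6
text, audio, video = (rng.normal(size=d) for _ in range(3))
w = rng.normal(size=d)                     # learned scoring vector (random here)
fused, alpha = attention_fuse([text, audio, video], w)
print(round(alpha.sum(), 6))               # 1.0
```

Because the weights sum to one, an uninformative modality receives a small weight and contributes little redundant information to the fused feature, which is then passed to the fully connected and softmax layers for the final classification.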
Keywords/Search Tags:Multi-modal data, Sentiment analysis, Feature fusion, LSTM, Attention mechanism