
Research On Multimodal Sentiment Analysis Method Based On Deep Learning

Posted on: 2022-02-05  Degree: Master  Type: Thesis
Country: China  Candidate: X Jin  Full Text: PDF
GTID: 2518306752496914  Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
With the rapid development of the Internet and people's growing demand for communication, online social media such as WeChat, Weibo, and Twitter are used by more and more people. Because communication now takes diverse forms, emotional text posted online in conversational form is growing rapidly, expressing individual opinions and attitudes. Multimodal conversational data (text, audio, and video) is especially representative of this trend and touches every aspect of daily life. Multimodal emotion analysis is an emerging research field that aims to identify a speaker's emotion by combining information from different modalities. In recent years, it has found growing application in public opinion analysis, intelligent dialogue, and user profiling.

This thesis studies multimodal emotion analysis with deep learning methods, carrying out experimental exploration from the perspectives of conversational characteristics and multimodal feature analysis. The specific research work is as follows:

(1) A hierarchical multimodal Transformer with localness and speaker-aware attention is proposed for the task of multimodal emotion recognition in conversations. Specifically, the algorithm uses a hierarchical multimodal Transformer as the base architecture and additionally adopts multi-task learning with auxiliary tasks. Moreover, a localness and speaker-aware attention mechanism is designed to capture more relevant contextual information. Extensive experimental evaluation shows that the proposed model outperforms existing multimodal methods.

(2) An adversarial multimodal representation learning algorithm based on BERT is proposed for the task of multimodal emotion recognition in conversations. First, the pretrained BERT model is introduced to resolve lexical ambiguity and to ensure that the initial model parameters start close to a good optimum. Second, an adversarial learning module is added to map modal features from different sources into a modality-invariant embedding space. Experimental results show that the introduction of adversarial learning and the BERT module is reasonable and effective, and the best results on the multimodal conversational emotion analysis task are obtained.
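The abstract does not give the exact formulation of the localness and speaker-aware attention, so the following is only an illustrative sketch of the general idea: standard scaled dot-product attention over a conversation's utterances, biased by a Gaussian localness penalty on utterance distance and a bonus for same-speaker positions. The window width and speaker bonus are assumed hyperparameters, not values from the thesis.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_speaker_attention(Q, K, V, speakers, window=2.0, speaker_bonus=1.0):
    """Scaled dot-product attention over T utterances with:
    - a Gaussian localness bias that penalizes distant utterances,
    - a bonus on query/key pairs uttered by the same speaker.
    Q, K, V: (T, d) arrays; speakers: length-T list of speaker ids."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                       # (T, T) raw scores
    T = scores.shape[0]
    pos = np.arange(T)
    dist_sq = (pos[:, None] - pos[None, :]) ** 2        # squared utterance distance
    scores = scores - dist_sq / (2.0 * window ** 2)     # localness: favor nearby context
    same = np.array(speakers)[:, None] == np.array(speakers)[None, :]
    scores = scores + speaker_bonus * same              # speaker-aware: favor same speaker
    weights = softmax(scores, axis=-1)                  # rows sum to 1
    return weights @ V, weights
```

In a hierarchical model such biases would typically be applied at the utterance-level (cross-utterance) layer, where locality and speaker identity are meaningful; this sketch shows only that single attention step.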
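One common way to realize an adversarial module that pushes features from different modalities into a shared, modality-invariant space is a gradient-reversal layer feeding a modality discriminator: the discriminator learns to predict which modality an embedding came from, while reversed gradients train the encoders to defeat it. The PyTorch sketch below is an assumption about a plausible implementation (the abstract names no framework), not the thesis's actual code; the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lamb in the
    backward pass, so the encoder is trained to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class ModalityDiscriminator(nn.Module):
    """Predicts the source modality (e.g. text / audio / video) of an
    embedding; gradients reaching the encoder are reversed."""
    def __init__(self, dim, n_modalities, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        self.net = nn.Sequential(
            nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, n_modalities)
        )

    def forward(self, z):
        return self.net(GradReverse.apply(z, self.lamb))
```

Training would add the discriminator's cross-entropy loss to the emotion-recognition loss; because of the reversal, minimizing the total loss simultaneously trains the discriminator to classify modalities and the encoders to erase modality-specific cues.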
Keywords/Search Tags:social media, multimodal, emotion analysis, multi-party conversation, deep learning