
Fusing Texts And Images To Recognize The Sentiment Of Online Users Based On Deep Learning

Posted on: 2021-04-07 | Degree: Master | Type: Thesis
Country: China | Candidate: T Fan | Full Text: PDF
GTID: 2517306512488354 | Subject: Information Science
Abstract/Summary:
The sentiment of online users strongly influences the evolution of online public opinion events. Public opinion on the Internet is no longer expressed in text alone; it increasingly takes the form of text accompanied by images and short videos. However, existing research on the sentiment recognition of online users focuses only on text, lacking studies of images and of text combined with images. To address this gap, we studied sentiment recognition that combines texts and their corresponding images to recognize the sentiment of online users in online public opinion events. We developed a multimodal sentiment recognition model that combines the texts and corresponding images of online public events, and presented a text sentiment recognition model and a visual sentiment recognition model respectively. The image-text fused sentiment recognition model is composed of the text sentiment recognition model, the image sentiment recognition model, and an image-text fusion method.

In text sentiment recognition, we used word2vec to represent the texts; the word embeddings are fed as input to BiLSTMs to perform sentiment recognition. In visual sentiment recognition, we employed a pre-trained VGG16 model as our base model and adjusted its structure to suit our datasets; to improve performance, we released (unfroze) different numbers of layers to fine-tune the model. In multimodal fusion sentiment recognition, we combined the aforementioned text and visual models into a multimodal sentiment recognition model trained in an end-to-end manner. Additionally, we set up several baseline models: word2vec combined with SVM, BERT combined with BiLSTMs, un-fine-tuned CNNs, fine-tuned CNNs with different released layers, and multimodal fusion sentiment recognition models employing different fusion strategies.

In visual sentiment recognition, we visualized the outputs of different layers of the visual sentiment recognition model to explore its learning process. For intermediate fusion, we conducted a quantitative analysis of whether improving the unimodal sentiment recognition models can boost the performance of multimodal sentiment recognition. For decision-level fusion, we analyzed the respective contributions of texts and images to multimodal sentiment recognition. To verify the generalization of the proposed model and the superiority of multimodal fusion, the model was tested on Twitter online public opinion events.

Experimental results showed that the proposed multimodal sentiment recognition model was superior to the baseline models. The quantitative analysis found that improving unimodal sentiment recognition did improve multimodal sentiment recognition. Additionally, texts in online public events contained more sentiment information than images, and the sentiment weights of texts in modal fusion were correspondingly higher. Finally, we applied the proposed model to Twitter online public opinion events containing texts and images; the results showed that the model had a degree of generalization ability and that multimodal fusion outperformed unimodal recognition. In particular, on a specific online public event such as "yellow vest", the F1 score of the proposed model exceeded 90%.
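The text branch described above (word2vec embeddings fed into a BiLSTM) can be sketched roughly as follows. The vocabulary size, embedding dimension, hidden size, and three-class output are illustrative assumptions, not values reported in the thesis, and the embedding matrix is randomly initialized here rather than loaded from word2vec.

```python
import torch
import torch.nn as nn

class TextSentimentBiLSTM(nn.Module):
    """BiLSTM over word embeddings (in the thesis, initialized from word2vec)."""
    def __init__(self, vocab_size=10000, embed_dim=300, hidden_dim=128, num_classes=3):
        super().__init__()
        # Randomly initialized for illustration; the thesis would load
        # pre-trained word2vec vectors into this matrix.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)           # (batch, seq, embed_dim)
        _, (h_n, _) = self.bilstm(embedded)            # h_n: (2, batch, hidden_dim)
        # Concatenate the final forward and backward hidden states.
        features = torch.cat([h_n[0], h_n[1]], dim=1)  # (batch, 2*hidden_dim)
        return self.classifier(features)

model = TextSentimentBiLSTM()
logits = model(torch.randint(0, 10000, (4, 20)))       # 4 sequences of length 20
```

The concatenated forward/backward final states give the fixed-size text feature that the fusion stage below would consume.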
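The fine-tuning recipe for the visual branch (freeze the pre-trained backbone, then "release" the last layers and retrain them with a new sentiment head) might look like the sketch below. To keep it self-contained, a small stand-in convolutional stack is used in place of pre-trained VGG16, and the choice of which layer to release and the three-class head are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in convolutional backbone (in the thesis this is ImageNet-pre-trained
# VGG16; a small stack is used here so the sketch runs on its own).
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
)
head = nn.Linear(64, 3)  # new sentiment head; 3 classes is an assumption

# Freeze the whole backbone, then release (unfreeze) only the last conv
# layer so it is fine-tuned together with the new head.
for param in backbone.parameters():
    param.requires_grad = False
for param in backbone[6].parameters():  # backbone[6] is the last Conv2d
    param.requires_grad = True

x = torch.randn(2, 3, 64, 64)
logits = head(backbone(x).flatten(1))
```

Releasing more or fewer layers trades off adaptation to the sentiment dataset against overfitting, which is why the abstract compares fine-tuned CNNs with different numbers of released layers.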
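The two fusion strategies analyzed above can be contrasted in a short sketch: intermediate fusion concatenates the unimodal feature vectors before a joint classifier, while decision-level fusion combines the unimodal predictions with per-modality weights. All dimensions, the helper names, and the 0.7 text weight are illustrative assumptions (the abstract reports only that text received the higher weight).

```python
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    """Concatenate text and image feature vectors, then classify jointly."""
    def __init__(self, text_dim=256, image_dim=512, num_classes=3):
        super().__init__()
        self.fusion_classifier = nn.Sequential(
            nn.Linear(text_dim + image_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, text_feat, image_feat):
        fused = torch.cat([text_feat, image_feat], dim=1)
        return self.fusion_classifier(fused)

def decision_fusion(text_probs, image_probs, text_weight=0.7):
    # Decision-level fusion: weighted average of unimodal class probabilities.
    # The larger text weight reflects the abstract's finding that text carries
    # more sentiment information; the exact value here is assumed.
    return text_weight * text_probs + (1 - text_weight) * image_probs

model = IntermediateFusion()
logits = model(torch.randn(4, 256), torch.randn(4, 512))
```

Because intermediate fusion trains the joint classifier on concatenated features end to end, improvements in either unimodal encoder propagate directly into the fused representation, matching the quantitative finding reported above.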
Keywords/Search Tags:Online public opinion events, Deep learning, Multimodal fusion, Sentiment recognition, BiLSTM