
Research On Rumor Detection Based On Multimodality

Posted on: 2022-12-30    Degree: Master    Type: Thesis
Country: China    Candidate: J X Chen    Full Text: PDF
GTID: 2518306779995999    Subject: Automation Technology
Abstract/Summary:
With the development of society and the progress of the times, Internet technology has brought great convenience to people's lives. In recent years in particular, a large number of social media platforms have emerged; they not only facilitate communication but also change the way people obtain and disseminate information. On social media platforms, people can freely create, publish, and retrieve trending information, and many exploit this timeliness to spread rumors, attract the attention of netizens, and seek profit. Rumors are defined as messages without a factual basis, and they often mislead readers. They harm not only individuals but also society and even the country; in some cases they have a strongly negative impact on social and public events and can even become a tool of political struggle. Compared with traditional text-only rumors, multi-modal rumors that include pictures, audio, and video are more influential: they attract readers more easily, steer their emotions, and dull their judgment. How to discover and debunk rumors has therefore become a pressing issue in China and around the world.

In recent years, multi-modal rumor detection technology has developed rapidly, moving from traditional machine learning methods to deep learning methods. However, most existing work struggles with two difficulties of multi-modal rumor detection. The first challenge is how to effectively establish connections between data of different modalities. The second is that training on known rumor events to detect unknown events tends to make the model focus on capturing event-specific characteristics; since training data for new events is scarce or even absent, the model has difficulty recognizing rumors about them. To address these two problems, this thesis designs multi-modal rumor detection networks; the specific research is as follows:

1. A multi-modal fusion network is designed to integrate text and image data from social media for rumor detection. Given the multi-modal features, the network uses a self-attentive fusion mechanism to assign a weight to each modality for feature-level fusion. Because textual features are more discriminative than visual features, the textual features are connected to the fused features through a residual connection. In addition, the network introduces a latent topic memory that stores semantic information about rumor and non-rumor events, which helps identify newly arriving posts (a code sketch of this fusion follows the abstract).

2. Building on the first model, this thesis designs a new multi-modal rumor detection network for social media. The network combines the complementary information of textual and visual features through a multi-head self-attentive fusion mechanism, assigning weights to the different modalities and performing feature fusion across multiple subspaces. In addition, the network uses a memory network with contrary latent topics to store the semantic patterns of true and false rumors, which helps identify newly posted rumors (a second sketch follows the abstract). Extensive experiments on three public datasets show that the multi-modal rumor detection methods proposed in this thesis outperform state-of-the-art methods.
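For readers who want a concrete picture of the first contribution, the following is a minimal PyTorch sketch of a self-attentive, weighted feature-level fusion with a residual text connection. The scoring function, the softmax over the two modalities, and every layer size are assumptions made for illustration only; the abstract does not give the exact formulation, and the class and parameter names (SelfAttentiveFusion, dim) are hypothetical.

```python
# Minimal sketch (assumed formulation, not the thesis's exact design) of a
# self-attentive modality fusion with a residual text connection.
import torch
import torch.nn as nn

class SelfAttentiveFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # assumed: a single linear layer scores each modality

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([text_feat, image_feat], dim=1)  # (batch, 2, dim)
        weights = torch.softmax(self.score(feats), dim=1)    # one attention weight per modality
        fused = (weights * feats).sum(dim=1)                 # weighted feature-level fusion
        return fused + text_feat                             # residual connection to the text features

# Example usage with arbitrary feature sizes:
# fusion = SelfAttentiveFusion(dim=256)
# fused = fusion(torch.randn(8, 256), torch.randn(8, 256))  # -> (8, 256)
```

In this reading, the fused vector would then be passed to the latent topic memory and the classifier described in the abstract.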
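The second network is described only at a high level, but a sketch in the same spirit might look like the following. Here nn.MultiheadAttention stands in for the multi-head self-attentive fusion, and two learnable memory matrices play the role of the contrary latent-topic memories; the number of memory slots, the dot-product memory addressing, the classifier head, and the name TopicMemoryRumorDetector are all assumptions for illustration.

```python
# Minimal sketch (assumed formulation) of multi-head self-attentive fusion plus
# contrary latent-topic memories for rumor vs. non-rumor patterns.
import torch
import torch.nn as nn

class TopicMemoryRumorDetector(nn.Module):
    def __init__(self, dim: int, heads: int = 4, n_topics: int = 32):
        super().__init__()
        self.fuse = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Two "contrary" latent-topic memories: rumor patterns and non-rumor patterns
        self.rumor_mem = nn.Parameter(torch.randn(n_topics, dim))
        self.nonrumor_mem = nn.Parameter(torch.randn(n_topics, dim))
        self.classifier = nn.Linear(3 * dim, 2)  # rumor / non-rumor logits

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        tokens = torch.stack([text_feat, image_feat], dim=1)  # (batch, 2, dim)
        fused, _ = self.fuse(tokens, tokens, tokens)          # multi-head self-attention over modalities
        fused = fused.mean(dim=1)                             # (batch, dim)
        # Read each memory with softmax dot-product addressing
        rumor_read = torch.softmax(fused @ self.rumor_mem.T, dim=-1) @ self.rumor_mem
        nonrumor_read = torch.softmax(fused @ self.nonrumor_mem.T, dim=-1) @ self.nonrumor_mem
        return self.classifier(torch.cat([fused, rumor_read, nonrumor_read], dim=-1))
```

The design intent, per the abstract, is that reads from the two opposing memories supply event-independent semantic patterns of true and false content, which helps the model generalize to rumors about unseen events.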
Keywords/Search Tags: rumor detection, memory network, attention mechanism, multi-modal fusion