Social media platforms face a new and evolving information-level cyber threat. Because the open nature of social platforms allows a large and constant flow of unverified information, rumors can emerge unexpectedly and spread quickly. Manually screening the enormous volume of content on social platforms would consume prohibitive manpower, so techniques that automatically identify rumors among large volumes of social posts and cut off their dissemination in a timely manner have attracted researchers' attention. Most existing work assumes that rumors from different event domains share the same training and test distribution; because rumors about different events belong to different domains, such methods cannot cope with the ever-changing social network environment or curb the spread of rumors in time. Whenever a new event domain appears, data must be collected and the deployed model retrained, incurring high maintenance costs. Timely identification and continuous monitoring of rumors about the emergencies that constantly appear on social platforms is therefore key to the rumor detection task. This paper formulates this problem as continual rumor detection for emergency events. The main research contents are as follows:

(1) Rumor detection for emergency events requires a fast, timely response. Because rumors are highly domain-specific and few training samples are available in the early stage of an emergency, the generalization ability of deep learning models is put to the test. This paper therefore adopts prompt learning for its stronger few-shot generalization and combines it with the proposed forward knowledge transfer strategies to better generalize to rumors about new emergency events.

(2) Faced with the large number of event domains that emerge on social media every day, models must also be capable of continual learning. Continual learning with deep models typically faces two major challenges: catastrophic forgetting and incremental learning. This paper proposes a solution to each. Exploiting the flexible domain-transfer ability of prompt-based language models, the parameters are divided into task-specific knowledge (fine-tuned soft prompt parameters) and general knowledge (frozen backbone parameters); using a different soft prompt initialization for each event effectively avoids catastrophic forgetting and keeps the model parameter-efficient. Incremental learning requires the model to accumulate knowledge throughout continual learning and to transfer it backward to old event tasks. To this end, this paper proposes two backward knowledge transfer strategies, memory replay and a task-conditioned prompt-wise hypernetwork (TPHNet), which strengthen knowledge accumulation and improve detection accuracy without introducing forgetting.

To verify the effectiveness of the above methods, this paper collects Chinese and English datasets covering 14 event domains and compares the proposed methods with existing related work. Extensive experimental results show that the proposed methods improve rumor detection accuracy and alleviate the problems of the continual emergency-rumor detection scenario.
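
The parameter split described in point (2), fine-tuned soft prompts as task-specific knowledge on top of a frozen backbone as general knowledge, can be illustrated with a minimal sketch. This is not the paper's implementation: the module name PromptRumorDetector, the generic transformer encoder standing in for the frozen pretrained language model, and all sizes (prompt length, hidden width, number of events) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PromptRumorDetector(nn.Module):
    """Hypothetical sketch: per-event soft prompts over a frozen backbone."""

    def __init__(self, vocab_size=30522, hidden=256, prompt_len=20, num_events=14):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # General knowledge: the backbone is frozen and shared across all event domains.
        for p in list(self.embed.parameters()) + list(self.encoder.parameters()):
            p.requires_grad = False
        # Task-specific knowledge: one trainable soft prompt per event domain.
        self.prompts = nn.ParameterList(
            [nn.Parameter(torch.randn(prompt_len, hidden) * 0.02) for _ in range(num_events)]
        )
        self.classifier = nn.Linear(hidden, 2)  # rumor / non-rumor

    def forward(self, token_ids, event_id):
        tokens = self.embed(token_ids)                        # (B, L, H)
        prompt = self.prompts[event_id].unsqueeze(0)          # (1, P, H)
        prompt = prompt.expand(tokens.size(0), -1, -1)        # (B, P, H)
        h = self.encoder(torch.cat([prompt, tokens], dim=1))  # (B, P+L, H)
        return self.classifier(h[:, 0])                       # predict from the first position


# When training on a new event, only that event's prompt (and the small head)
# receives gradients, so knowledge learned for earlier events is left untouched.
model = PromptRumorDetector()
logits = model(torch.randint(0, 30522, (8, 64)), event_id=3)
print(logits.shape)  # torch.Size([8, 2])
```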
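
Memory replay, the first of the two backward knowledge transfer strategies, can be sketched as a small per-event buffer whose stored samples are mixed into later training batches. The buffer capacity, sampling policy, and the class name ReplayBuffer below are assumptions for illustration, not the paper's actual design.

```python
import random


class ReplayBuffer:
    """Hypothetical sketch: keep a few labeled examples per old event for replay."""

    def __init__(self, capacity_per_event=50):
        self.capacity = capacity_per_event
        self.store = {}  # event_id -> list of (text, label) pairs

    def add(self, event_id, samples):
        kept = self.store.setdefault(event_id, [])
        kept.extend(samples)
        # Keep at most `capacity_per_event` examples for each old event.
        self.store[event_id] = random.sample(kept, min(len(kept), self.capacity))

    def sample(self, k):
        pool = [s for samples in self.store.values() for s in samples]
        return random.sample(pool, min(k, len(pool)))


# During training on a new event, each batch is augmented with a few replayed
# old-event examples so shared parameters keep supporting earlier tasks.
buffer = ReplayBuffer()
buffer.add(0, [("claim about event 0", 1)] * 10)
replayed = buffer.sample(4)
```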
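
The abstract names the task-conditioned prompt-wise hypernetwork (TPHNet) only at a high level. The sketch below shows the general idea of such a hypernetwork: a small network that maps a learned task embedding to the soft prompt for that event, so prompts for old events can also be refined by later training. The two-layer MLP, the module name TaskPromptHypernetwork, and all sizes are speculative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn


class TaskPromptHypernetwork(nn.Module):
    """Hypothetical sketch: generate a soft prompt conditioned on the event (task) id."""

    def __init__(self, num_events=14, task_dim=64, hidden=256, prompt_len=20):
        super().__init__()
        self.prompt_len, self.hidden = prompt_len, hidden
        self.task_embed = nn.Embedding(num_events, task_dim)  # one embedding per event domain
        self.generator = nn.Sequential(                        # maps task embedding -> prompt
            nn.Linear(task_dim, 512),
            nn.ReLU(),
            nn.Linear(512, prompt_len * hidden),
        )

    def forward(self, event_id: torch.Tensor) -> torch.Tensor:
        z = self.task_embed(event_id)                          # (B, task_dim)
        prompt = self.generator(z)                             # (B, P*H)
        return prompt.view(-1, self.prompt_len, self.hidden)   # (B, P, H)


# Because old-event prompts are regenerated from the shared hypernetwork, training on a
# new event can also improve them (backward transfer), complementing memory replay.
hyper = TaskPromptHypernetwork()
prompts = hyper(torch.tensor([0, 3, 7]))
print(prompts.shape)  # torch.Size([3, 20, 256])
```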