In recent years, the flourishing development of online social platforms has not only facilitated people's communication but also led to the proliferation of online rumors. To counter the negative social impact of online rumors, it is urgent to establish an efficient and accurate rumor detection model. However, high-quality rumor detection models are data-driven and require large amounts of training data, and manually annotated rumor data currently cannot keep pace with the prevalence of online rumors. In addition, the training data held by any single institution is limited, and comprehensive data from multiple parties is needed to further advance rumor detection technology. Yet the social data of online users involves their private information, and high-quality labeled data is a private asset for enterprises, so data held by different institutions is difficult to exploit jointly, forming "data silos".

To address these issues, this paper proposes a rumor detection technology based on federated learning, together with a security optimization scheme for the training process of the federated rumor detection model. The specific research content is as follows.

First, this paper proposes a rumor detection technique that combines the federated learning paradigm with an improved graph attention network model to address the "data silo" problem among social media platforms. The paper elevates the client layer of traditional federated learning as a whole: each social network platform holding annotated data serves as a federated learning client. An improved graph attention network rumor detection model is designed locally on the client side; it filters out irrelevant nodes in the input rumor event graph and enhances the features of the source post, so as to extract rumor features more accurately. In addition, this paper combines the graph attention network model with the federated learning paradigm to train a
federated rumor detection model (FL-GARD) with stronger overall inference ability, drawing on multiple dimensions such as model mechanism and data sources. With the proposed method, each participant can achieve effective multi-party collaborative machine learning under privacy restrictions, ultimately improving the performance of its local model. Simulation experiments on real rumor datasets verify that the proposed method effectively improves the performance of participants' local models and, under privacy constraints, brings their classification performance close to the ideal scenario in which data can be shared.

Second, this paper proposes a lightweight local-level privacy protection mechanism to counter privacy attacks that may occur during the training of federated learning models. The mechanism perturbs the model parameters uploaded by the participants in FL-GARD training. Furthermore, because conventional parameter perturbation mechanisms significantly degrade model performance, a mixed perturbation mechanism based on a mask matrix is designed. By sensing the positions of important model parameters and using a dynamically adjusted mask matrix, this method adaptively allocates the positions of the noise, balancing participants' data privacy against model performance. Simulation experiments verify that the proposed hybrid perturbation mechanism effectively reduces the impact of noise on the performance of the rumor detection model while still protecting data privacy, making it a feasible security enhancement scheme.
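The node-filtering and source-enhancement idea behind the improved graph attention model can be sketched as follows. This is a toy single-head illustration: the dot-product scoring rule, the threshold `tau`, and the function names are assumptions for exposition, not the thesis's exact design.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def filtered_attention(node_feats, source_idx=0, tau=0.05):
    """Attend over a rumor event graph, dropping irrelevant nodes.

    Nodes whose attention weight falls below tau are treated as irrelevant
    and zeroed out; the source post's features are concatenated onto the
    aggregate as a simple form of source feature enhancement.
    """
    src = node_feats[source_idx]
    scores = node_feats @ src                # relevance of each node to the source post
    weights = softmax(scores)
    weights = np.where(weights < tau, 0.0, weights)  # filter irrelevant nodes
    weights = weights / weights.sum()                # renormalize remaining weights
    aggregate = weights @ node_feats
    return np.concatenate([aggregate, src])          # source feature enhancement

# three posts: source, a supporting reply, and an off-topic reply
feats = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]])
rep = filtered_attention(feats, source_idx=0, tau=0.2)
```

With `tau=0.2` the off-topic node is excluded from the aggregate, and the returned representation carries the source features explicitly in its second half.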
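The mask-based hybrid perturbation can likewise be sketched in a few lines. Here importance is approximated by parameter magnitude, noise is Gaussian, and `keep_ratio`/`sigma` are illustrative hyperparameters; the thesis's dynamically adjusted mask matrix may sense importance differently.

```python
import numpy as np

def importance_mask(params, keep_ratio=0.25):
    """Binary mask marking the top fraction of parameters by magnitude as important."""
    k = max(1, int(len(params) * keep_ratio))
    idx = np.argsort(np.abs(params))[-k:]
    mask = np.zeros_like(params)
    mask[idx] = 1.0
    return mask

def masked_perturb(params, sigma=0.1, keep_ratio=0.25, seed=None):
    """Add Gaussian noise only where the mask is 0, shielding important parameters.

    Simplified stand-in for the adaptive allocation of noise positions:
    important weights pass through unperturbed, the rest absorb the noise.
    """
    rng = np.random.default_rng(seed)
    mask = importance_mask(params, keep_ratio)
    noise = rng.normal(0.0, sigma, size=params.shape)
    return params + (1.0 - mask) * noise

params = np.array([5.0, 0.1, -4.0, 0.05])
perturbed = masked_perturb(params, sigma=0.1, keep_ratio=0.5, seed=0)
# the two largest-magnitude weights (5.0 and -4.0) are left untouched
```

In a full FL-GARD round, each client would apply such a perturbation to its parameter update before uploading it to the server, trading a small accuracy cost on unimportant positions for privacy protection.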