With the arrival of the big data era and the continuous development of artificial intelligence, dynamic graph data has been widely used in real life and scientific research, and has great practical and research value. Graph data in many practical scenarios is generated dynamically; for example, a dynamic graph is continuously extended as new users join and new relationships are established. As the volume of dynamic graph data grows across various domains, such data plays an increasingly important role. Dynamic graph neural networks, an effective tool for processing graph data that shows excellent performance on link prediction and node classification tasks, have therefore received much attention in recent years. As graph neural networks are applied to more and more fields, their privacy issues have also drawn increasing attention, and privacy-preserving methods for graph neural networks have been proposed; however, research on privacy protection for dynamic graph neural networks is still at a relatively preliminary stage.

In the centralized scenario, a dynamic graph neural network updates node information over time; when an attacker captures the transformed node information at different moments, it can combine this with certain background knowledge to infer edge information in the dynamic graph. In distributed scenarios, node feature and label information also faces serious privacy threats if it is provided directly to untrustworthy servers. Differential privacy is an effective privacy-preserving technique with a rigorous mathematical definition; it requires no prior knowledge of the background knowledge possessed by the attacker and has been widely used in various data distribution scenarios. Privacy-preserving models for graph neural networks based on differential privacy are a hot research topic at the intersection of machine learning and data security. However, existing privacy-preserving methods for graph neural networks mainly focus on static graphs, and no research has yet considered the privacy leakage problem in dynamic graph neural networks. In this paper, we analyze the privacy leakage problems of dynamic graph neural networks in centralized and distributed scenarios and propose two differential privacy-preserving methods, one for each scenario. The main research work of this paper is as follows.

(1) A streaming differential privacy-preserving method for dynamic graph neural networks is proposed to address the privacy leakage problem in the centralized scenario. First, we analyze how an attacker can infer private edge information in a dynamic graph from the information passed during training in the centralized scenario, combined with certain background knowledge. To address this problem, the proposed method uses the sliding-window mechanism of streaming differential privacy to model dynamic time and perturbs the information passed during training as the window slides, thereby protecting the edge information in the dynamic graph. To obtain a more reasonable allocation of the privacy budget, a budget allocation strategy that conforms to the characteristics of dynamic graphs is proposed, making the perturbation of the transmitted information more accurate and preserving model utility while guaranteeing the privacy protection effect.

(2) A local differential privacy protection method for dynamic graph neural networks is proposed to address the privacy leakage problem in distributed scenarios. First, we show that during training in the distributed scenario, sending node feature and label information directly to untrusted servers poses a serious threat to node privacy. To protect node features, a multi-bit encoder perturbs the feature information, and a rectification operation is then applied to the perturbed features. To protect node labels, a randomized response mechanism flips each node's label with a certain probability, so that the label information is protected by a controlled amount of noise. To further reduce the impact of the injected noise on model utility and improve data usability, a multi-layer node information aggregation method brings the perturbed node features and labels closer to the true information, so that better model utility can be obtained while node privacy is guaranteed.

(3) Privacy analyses of the two proposed algorithms are carried out, and experiments on several public real-world datasets (Cora, Elliptic, DBLP, UCI, DNC and Epinions) verify the utility of each algorithm. The experimental results demonstrate that the proposed algorithms guarantee the security of dynamic graph data while the models achieve good performance in downstream tasks.
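The abstract does not specify the budget allocation strategy of contribution (1). As an illustrative sketch only, the following shows one plausible shape of the idea: a per-window privacy budget is split across the snapshots inside a sliding window (here with a geometric decay that favors recent snapshots, which is an assumption, not the thesis's actual strategy), and each snapshot's transmitted node embeddings are perturbed with Laplace noise scaled to its share. The function names and the decay parameter are hypothetical.

```python
import numpy as np

def allocate_budget(eps_total, window_size, decay=0.5):
    """Split a per-window budget eps_total across the window's snapshots,
    giving geometrically more budget (hence less noise) to recent ones.
    Illustrative allocation only; the thesis proposes its own strategy."""
    weights = np.array([decay ** (window_size - 1 - i) for i in range(window_size)])
    return eps_total * weights / weights.sum()

def perturb_embeddings(emb, eps, sensitivity=1.0, rng=None):
    """Laplace-perturb one snapshot's node embeddings with budget eps."""
    rng = rng or np.random.default_rng()
    return emb + rng.laplace(0.0, sensitivity / eps, size=emb.shape)

# Example: a window of 3 snapshots sharing a total budget of 1.0.
eps_parts = allocate_budget(1.0, 3)
snapshots = [np.zeros((4, 8)) for _ in range(3)]  # 4 nodes, 8-dim embeddings
noisy = [perturb_embeddings(e, eps) for e, eps in zip(snapshots, eps_parts)]
```

By sequential composition, the noise added across the snapshots of one window consumes at most the total budget `eps_total` for that window.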
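For contribution (2), the abstract mentions a multi-bit encoder followed by rectification but gives no formulas. A minimal one-bit-per-dimension sketch of this family of mechanisms is shown below, assuming features lie in a known range [alpha, beta]: each dimension is randomized to -1 or +1, and the rectification step applies the standard unbiased linear correction so that the rectified value equals the true feature in expectation. The exact encoder in the thesis (including how many bits per dimension are sampled) may differ.

```python
import numpy as np

def multibit_encode(x, eps, alpha=0.0, beta=1.0, rng=None):
    """One-bit-per-dimension variant of a multi-bit encoder: each feature
    in [alpha, beta] is randomized to -1 or +1, with the probability of +1
    increasing in the feature value (eps-local differential privacy)."""
    rng = rng or np.random.default_rng()
    e = np.exp(eps)
    p = 1.0 / (e + 1.0) + (x - alpha) / (beta - alpha) * (e - 1.0) / (e + 1.0)
    return np.where(rng.random(x.shape) < p, 1.0, -1.0)

def rectify(t, eps, alpha=0.0, beta=1.0):
    """Unbiased rectification: E[rectify(multibit_encode(x))] == x."""
    e = np.exp(eps)
    return (beta - alpha) / 2.0 * (e + 1.0) / (e - 1.0) * t + (alpha + beta) / 2.0
```

Averaging rectified values over many nodes (as the multi-layer aggregation step does) concentrates the estimate around the true mean, which is why aggregation can recover utility despite heavy per-node noise.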
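The label-flipping step of contribution (2) can be illustrated with standard k-ary randomized response, which keeps the true label with probability e^eps / (e^eps + k - 1) and otherwise reports one of the remaining labels uniformly at random. This is a textbook mechanism assumed here for illustration; the thesis's flipping probabilities may be parameterized differently.

```python
import numpy as np

def randomized_response(label, num_classes, eps, rng=None):
    """k-ary randomized response: keep the true label with probability
    e^eps / (e^eps + k - 1); otherwise report another label uniformly.
    Satisfies eps-local differential privacy for the label."""
    rng = rng or np.random.default_rng()
    k = num_classes
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_keep:
        return label
    others = [c for c in range(k) if c != label]
    return int(rng.choice(others))
```

Because the flipping probability is known, the server can debias aggregate label statistics, trading per-node deniability against accuracy via eps.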