
Research On Privacy Protection Method Based On Graph Convolutional Neural Network

Posted on: 2021-01-25    Degree: Master    Type: Thesis
Country: China    Candidate: Y P Tian    Full Text: PDF
GTID: 2518306041461414    Subject: Computer system architecture
Abstract/Summary:
In recent years, machine learning as a service (MLaaS) has attracted the attention of many data researchers because of its convenience and practicality, and has changed the common perception that machine learning models are difficult and costly to build. Its data-driven nature means that large amounts of user data, much of it containing sensitive information, are used to train service models. However, untrusted machine learning cloud services cannot provide strong privacy guarantees for online users and training-set members: privacy threats such as malicious inference of private user information, model inversion attacks, and membership inference attacks can cause serious damage to both users and service providers. Protecting the privacy of machine learning models and user information has therefore become an important problem in the development of MLaaS.

The graph convolutional neural network (GCN), an emerging machine learning algorithm, propagates information over a graph to effectively extract features of topological networks. Its strong ability to learn from graph-structured data has made it popular in social network research, where it has advanced work on attribute and link inference; at the same time, untrusted cloud service platforms increase the risk that social users' private information will be maliciously inferred. Targeting the privacy threats that GCN prediction models pose to user data in social network scenarios, this thesis proposes a private attribute inference method based on graph convolutional neural networks to infer the private attributes of social users. Building on this inference model, the thesis then applies differential privacy techniques to provide corresponding protection mechanisms and proposes a differential privacy protection method based on graph convolutional neural networks.

The main work of this thesis is as follows:

(1) The common privacy threats of machine learning as a service are analyzed from the perspectives of both users and models, and the working mechanisms of threat methods such as private attribute inference, model inversion attacks, and membership inference attacks are introduced. Related work on differential privacy protection for machine learning is then classified and surveyed.

(2) An attribute inference method based on graph convolutional neural networks, PAI-GCN, is proposed. The method demonstrates the privacy threat that GCN prediction models pose to user data in social network scenarios. Exploiting the homophily of social networks, it uses the attribute features and social relationships publicly exposed by users to train a GCN classification model by semi-supervised learning, with the goal of inferring the private attribute categories that target users have hidden. The robustness and accuracy of the method are evaluated on the real social network dataset Soc-Pokec; experimental results show that the method correctly infers the private attributes of 80.8% of social users.

(3) Combining differential privacy techniques, a differential privacy protection method based on graph convolutional neural networks, DP-GCN, is proposed. The method injects noise drawn from the Laplace distribution into the trained model parameters and uses the perturbed weights to classify unseen data, so that even an adversary who obtains the published model parameters finds it difficult to infer private information about training members from the prediction results, thereby protecting both user data and model parameters. The privacy guarantee and data utility of the method are evaluated through theoretical proof and experimental analysis; the results show that DP-GCN achieves differential privacy for the data while retaining high data utility.

This thesis provides an effective privacy protection solution for GCN service models, mitigating the limitations that privacy threats place on GCN applications, and plays a positive role in promoting privacy protection for graph convolutional neural networks.
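As context for the semi-supervised attribute inference in (2), the following is a minimal sketch of the standard two-layer GCN forward pass for node classification, which the abstract indicates PAI-GCN builds on. This is not the thesis's actual implementation: the toy graph, feature dimensions, and random weights are illustrative assumptions only.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops:
    A_hat = D~^{-1/2} (A + I) D~^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_forward(A_hat, X, W1, W2):
    """Two-layer GCN: softmax(A_hat @ relu(A_hat @ X @ W1) @ W2)."""
    H = np.maximum(A_hat @ X @ W1, 0.0)        # hidden representation
    Z = A_hat @ H @ W2                         # class logits per node
    Z -= Z.max(axis=1, keepdims=True)          # numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)    # class probabilities

# Toy social graph: 4 users, a chain of friendships 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))          # publicly exposed attribute features
W1 = rng.normal(size=(5, 8))         # (in practice, learned weights)
W2 = rng.normal(size=(8, 2))         # 2 hypothetical private-attribute classes

P = gcn_forward(normalized_adjacency(A), X, W1, W2)
pred = P.argmax(axis=1)              # inferred private-attribute class per user
```

In the attack setting described above, the labeled nodes would be users who expose the attribute publicly, and the model's predictions on the remaining nodes constitute the private-attribute inference.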
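The parameter perturbation in (3) can likewise be sketched with the generic Laplace mechanism. The sensitivity bound and privacy budget epsilon below are placeholder values, not the thesis's actual sensitivity analysis or budget allocation.

```python
import numpy as np

def laplace_perturb(W, sensitivity, epsilon, rng=None):
    """Laplace mechanism: add i.i.d. noise ~ Lap(0, sensitivity / epsilon)
    to each weight. With a correct sensitivity bound for the training
    procedure, the released parameters satisfy epsilon-differential privacy."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return W + rng.laplace(loc=0.0, scale=scale, size=W.shape)

# Perturb trained GCN weights before publishing the model; the perturbed
# weights (not the originals) are then used to classify unseen data.
rng = np.random.default_rng(42)
W_trained = rng.normal(size=(8, 2))                  # stand-in for learned weights
W_private = laplace_perturb(W_trained, sensitivity=1.0, epsilon=2.0, rng=rng)
```

A smaller epsilon means a larger noise scale and stronger privacy at the cost of utility, which is the privacy/utility trade-off the thesis evaluates for DP-GCN.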
Keywords/Search Tags:graph convolutional neural networks, privacy protection, differential privacy, privacy inference, social networks