
Evaluating Local Differential Privacy Under Membership Inference Attacks In Federated Learning

Posted on: 2022-11-19  Degree: Master  Type: Thesis
Country: China  Candidate: H Zhang  Full Text: PDF
GTID: 2518306779464064  Subject: Automation Technology
Abstract/Summary:
Federated learning has received much attention as a distributed model-training method that protects the privacy of training data. However, it has been shown that, since local model updates inevitably contain information about the training data, attackers can exploit them to infer characteristics of the local training data. Local differential privacy mechanisms are a widely studied class of data privacy-preserving methods. However, there is a lack of systematic research on how efficiently local differential privacy based on randomized response can be applied to federated learning, how adding randomized response-based data perturbation affects the performance of federated learning (convergence and training accuracy), and how well the perturbed federated learning system resists inference attacks on the training data. To this end, we evaluate the impact of local differential privacy on model accuracy loss and on membership inference attacks by implementing three classical local differential privacy mechanisms in a federated learning framework supporting two classes of aggregation updates (model weights and gradients), and by conducting experiments. Finally, the relationship between the privacy budget and the model convergence speed is analyzed, and an adaptive privacy budget allocation mechanism is designed. Specifically, the work in this paper is as follows.

First, a federated learning framework is implemented, containing generic functional modules such as training-data partitioning, client selection, and model perturbation, and supporting two types of federated learning based on model weights and gradient updates. Neural network models are trained with both types of model updates on four publicly available datasets, namely MNIST, Fashion-MNIST, CIFAR-10, and Purchase-100, and their performance (convergence and model accuracy) is analyzed.

Second, motivated by the open participation of clients in federated learning, adversary capabilities under white-box conditions are comprehensively considered across active/passive, global/local, and collusion scenarios. A global passive attack (server attacker), a local active attack (client attacker), and a global active attack (server-client collusion) are implemented, and their attack performance against the standard federated learning process is verified.

Then, a randomized response mechanism, an optimized unary encoding mechanism, and a piecewise mechanism satisfying local differential privacy are designed and implemented in the federated learning environment. The effectiveness of these local differential privacy mechanisms against the three types of white-box membership inference attacks, and their impact on model performance, are investigated. Specifically, we study the selection of the optimal number of cells in the implementation of the local differential privacy mechanisms; we analyze the relationships among the privacy budget, the number of clients, the training-data distribution, model accuracy, and attack success rate; and we explore the relationships among attack mode, attack timing, attack duration, and attack success rate.

Finally, we observe that different layers of a neural network model contain different amounts of information about the features of the training data, and that the amount of training-data information contained in each layer increases as the model converges during federated learning. Therefore, an adaptive privacy budget allocation mechanism is designed that assigns separate privacy budgets to different layers of the model and dynamically adjusts them according to the degree of model convergence. Experiments show that the adaptive privacy budget allocation mechanism can effectively improve the convergence speed of the neural network model in federated learning without increasing the attack success rate (i.e., with constant privacy loss).
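The weight-aggregation style of federated learning described above can be sketched as a FedAvg-style weighted average of client weights. The abstract does not give the thesis's exact aggregation rule, so the function name `fedavg` and the size-weighted averaging below are assumptions, not the author's implementation:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average per-layer weight arrays across
    clients, weighting each client by its local dataset size."""
    total = sum(client_sizes)
    # Start from zero arrays shaped like the first client's layers.
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for layer, w in enumerate(weights):
            agg[layer] += (n / total) * w
    return agg
```

For example, with two clients holding one-layer models and dataset sizes 1 and 3, the second client's weights dominate the aggregate with weight 3/4.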
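Of the three LDP mechanisms named above, the piecewise mechanism perturbs a numeric value (e.g., a normalized weight or gradient component) under epsilon-LDP. The sketch below follows the standard piecewise mechanism for inputs in [-1, 1]; the thesis's exact parameterization is not given in the abstract, so this is a generic illustration rather than the author's code:

```python
import math
import random

def piecewise_mechanism(t, epsilon, rng=random):
    """Perturb t in [-1, 1] under epsilon-LDP via the piecewise mechanism.
    Outputs lie in [-C, C] with C = (e^{eps/2}+1)/(e^{eps/2}-1), and the
    reported value is an unbiased estimate of t."""
    assert -1.0 <= t <= 1.0
    e_half = math.exp(epsilon / 2.0)
    C = (e_half + 1.0) / (e_half - 1.0)
    # High-probability sub-interval [l, r] centered near the true value.
    l = (C + 1.0) / 2.0 * t - (C - 1.0) / 2.0
    r = l + C - 1.0
    if rng.random() < e_half / (e_half + 1.0):
        return rng.uniform(l, r)
    # Otherwise sample uniformly from the two tails [-C, l] and [r, C].
    u = rng.uniform(0.0, (l + C) + (C - r))
    return -C + u if u <= l + C else r + (u - (l + C))
```

Averaging many perturbed reports recovers the true value, which is what makes the mechanism usable for aggregation on the server side.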
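The adaptive per-layer budget allocation is described above only at a high level, so the allocation rule below is purely illustrative: it assumes later layers leak more training-data information and therefore receive a smaller share (i.e., stronger perturbation) of a fixed total budget, with the skew growing as a convergence indicator in [0, 1] increases. The function name and weighting formula are our assumptions, not the thesis's rule:

```python
def per_layer_budgets(eps_total, n_layers, convergence):
    """Split eps_total across layers (sequential composition, so the
    shares sum to eps_total). Layer 0 is the input-side layer; deeper
    layers get smaller shares, more so as convergence -> 1."""
    # Illustrative weighting: share of layer i decays as 1/(1 + c*i).
    raw = [1.0 / (1.0 + convergence * i) for i in range(n_layers)]
    s = sum(raw)
    return [eps_total * w / s for w in raw]
```

At convergence 0 the split is uniform; as convergence approaches 1, deeper layers are noised more heavily, matching the abstract's observation that converged layers carry more training-data information.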
Keywords/Search Tags:federated learning, local differential privacy, membership inference attack, privacy leakage ratio, adaptive allocation of privacy budget