
Research On Defense Against Membership Inference Attacks In Federated Learning

Posted on: 2022-02-19
Degree: Master
Type: Thesis
Country: China
Candidate: X K Wu
Full Text: PDF
GTID: 2518306572477784
Subject: Information and Communication Engineering
Abstract/Summary:
With the rapid development of artificial intelligence, machine learning, as its most fundamental implementation technique, has been widely applied in many fields. Federated learning was proposed to address both the privacy risks and the "data island" problem of traditional machine learning. However, recent work shows that federated learning still faces data privacy risks. In this thesis, we therefore study privacy protection and privacy-preserving techniques in federated learning, focusing on membership inference attacks, which aim to determine whether a given sample was part of a model's training data, and we analyze existing defense schemes and their deficiencies. We propose two defense schemes against membership inference attacks in federated learning.

To address the leakage of clients' user data in federated learning, we design a privacy-preserving algorithm based on local differential privacy: user data are perturbed with a local differential privacy mechanism before the clients collect them. To address the privacy leakage caused by the deployed federated learning model, we design a privacy protection algorithm based on maximizing gradient angle deviations. Before the deployment phase, we perturb the prediction probability vectors that the local models output for all clients' data, keeping the top-1 category label of each sample unchanged while maximizing the gradient angle deviations; these perturbed vectors are then used to train gradient-protected local models for the final global model aggregation.

We build a federated learning environment on several real-world datasets and standard network architectures, and evaluate both defense schemes against membership inference attacks through extensive experiments. With a perturbation probability of 0.1, our local differential privacy scheme achieves a model prediction accuracy of 75% on a multi-classification task and reduces the accuracy of the membership inference attack from 61.3% to 52%. Moreover, at the same model prediction performance as the traditional differential privacy scheme, it reduces the attack accuracy by more than 10%. Our scheme based on maximizing gradient angle deviations performs comparably to the traditional differential privacy scheme and the TOP1 scheme against black-box membership inference attacks, reducing the attack accuracy by 6%-16%. Against white-box attacks it outperforms the traditional differential privacy scheme, reducing the attack accuracy by 5%-24%, while its impact on model prediction performance is much smaller. The experimental results show that both defense schemes effectively resist membership inference attacks while maintaining high model prediction performance.
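The abstract does not specify which local differential privacy mechanism the first scheme uses, only that user data are perturbed on the client side with a perturbation probability such as 0.1. The snippet below is a minimal Python sketch of one common choice, randomized response on class labels; the function name randomized_response and the parameter p_flip are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def randomized_response(labels, num_classes, p_flip=0.1, rng=None):
    """With probability p_flip, replace each true label with a uniformly
    random different class; otherwise keep it. A basic client-side local
    differential privacy perturbation applied before data collection."""
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    flip = rng.random(labels.shape[0]) < p_flip
    # a shift in [1, num_classes) guarantees the replacement differs from the truth
    shift = rng.integers(1, num_classes, size=labels.shape[0])
    return np.where(flip, (labels + shift) % num_classes, labels)

# each client perturbs its own labels locally, e.g. with p_flip = 0.1
noisy_labels = randomized_response(np.array([0, 2, 1, 3]), num_classes=4, p_flip=0.1)
```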
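The second scheme perturbs the local models' output probability vectors while keeping each sample's top-1 label unchanged and maximizing gradient angle deviations. The abstract omits the optimization details, so the sketch below shows only the label-preserving perturbation step with simple random noise in place of the gradient-angle objective; the function perturb_probability_vectors and the noise_scale parameter are hypothetical.

```python
import numpy as np

def perturb_probability_vectors(probs, noise_scale=0.1, rng=None):
    """Add small noise to each softmax output vector and renormalize,
    while forcing the original top-1 class to remain the argmax."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(probs, dtype=float)
    perturbed = probs + rng.uniform(0.0, noise_scale, size=probs.shape)
    perturbed /= perturbed.sum(axis=1, keepdims=True)
    for i, row in enumerate(perturbed):
        top1 = probs[i].argmax()
        j = row.argmax()
        if j != top1:
            # swap so the originally predicted class keeps the largest probability
            row[top1], row[j] = row[j], row[top1]
    return perturbed

# perturbed vectors would then serve as soft targets for retraining the local models
soft_targets = perturb_probability_vectors(np.array([[0.7, 0.2, 0.1],
                                                     [0.1, 0.6, 0.3]]))
```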
Keywords/Search Tags:Federated learning, Membership inference attack, Defense scheme, Local differential privacy, Gradient protection