Poisoning And Privacy Inferring Attack And Defense Methods Towards Federated Learning

Posted on: 2022-09-06  Degree: Master  Type: Thesis
Country: China  Candidate: Y G Ren  Full Text: PDF
GTID: 2518306563959969  Subject: Computer technology
Abstract/Summary:
Federated learning is a new distributed learning framework that keeps training data local and thus avoids leaking it, but it also faces unprecedented security and privacy threats. For example, poisoning attacks known from traditional machine learning are easy to carry out in federated learning: because local data are invisible to the outside world, malicious participants can tamper with their data to poison the global model. In addition, the server and the local clients communicate through model parameters, so a malicious participant can obtain these parameters directly and, by analyzing them, still infer private information to a certain extent. This thesis therefore studies the threats of poisoning attacks and privacy leakage in federated learning. By analyzing common and classic attacks of both kinds, it proposes corresponding new detection and defense methods, providing a new way to address poisoning and privacy threats in federated learning. The main research results are as follows:

(1) A detection method based on the class mean of the fully connected layer is proposed for the label flipping attack. Label flipping is a common data poisoning attack in the federated learning scenario, and studies have shown that even a very small number of attackers can severely damage a federated learning system. By interpreting the model structure, the thesis analyzes how classification probabilities are computed at the fully connected layer of the neural network and how significant each class's weights are for classification. The experiments show that the average weight of each class reflects what the model has learned, so a new detection method, the class mean of the fully connected layer, is proposed. The results show that the label flipping attack can be detected by comparing the difference in the distribution of this class mean between attackers and honest clients. The same statistic can also detect poisoning attacks that add noise to data samples.

(2) A parameter compression defense method against the GAN attack is proposed. On the privacy leakage side, the GAN-based privacy inference attack targets the specific training process of federated learning: the attacker pretends to be an honest participant during communication, steals private information about other participants' data, and finally reconstructs a victim's private data, which makes it a very harmful attack on federated learning. To defend against it, the thesis studies, through a literature survey, how to truncate the parameter updates uploaded by participants so as to prevent the leakage of private information, and proposes a parameter compression defense. Parameter compression protects private information by truncating the updated parameters and thereby reducing the amount of information that is shared; it prevents the attacker from recovering the victim's private data while maintaining the accuracy of the global model. The parameter compression method can defend against the GAN attack effectively.
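The abstract does not give the exact computation behind the class mean of the fully connected layer, but the idea in contribution (1) can be illustrated roughly as follows: average the final fully connected layer's weights per output class for every client update, then flag clients whose per-class means deviate strongly from the other clients'. The sketch below is a minimal, assumed reconstruction; the outlier rule (a median/MAD score with an illustrative threshold) and all names are placeholders, not the thesis's actual procedure.

```python
# Hypothetical sketch of the "class mean of fully connected layer" detection idea:
# average the final FC layer's weights per output class for each client update and
# flag clients whose per-class means deviate strongly from the cross-client median.
import numpy as np

def class_means(fc_weight: np.ndarray) -> np.ndarray:
    """fc_weight has shape (num_classes, hidden_dim); return one mean per class."""
    return fc_weight.mean(axis=1)

def detect_label_flipping(client_fc_weights: dict[str, np.ndarray],
                          threshold: float = 3.0) -> list[str]:
    """Return ids of clients whose class-mean vector is an outlier (possible poisoners)."""
    ids = list(client_fc_weights)
    means = np.stack([class_means(client_fc_weights[c]) for c in ids])  # (n_clients, n_classes)

    # Robust reference: per-class median and median absolute deviation across clients,
    # so a few poisoned clients do not shift the baseline.
    median = np.median(means, axis=0)
    mad = np.median(np.abs(means - median), axis=0) + 1e-12

    # A client is suspicious if any of its class means deviates by more than
    # `threshold` robust standard deviations from the cross-client median.
    scores = np.abs(means - median) / (1.4826 * mad)
    return [cid for cid, s in zip(ids, scores) if s.max() > threshold]

# Example with nine "honest" clients and one client whose weights for class 3 are shifted.
rng = np.random.default_rng(0)
clients = {f"client{i}": rng.normal(0, 0.1, size=(10, 128)) for i in range(10)}
clients["client9"][3] += 1.0  # simulate the distortion a flipped label might cause
print(detect_label_flipping(clients))  # likely prints ['client9']
```

The median-based comparison is only one possible way to compare the distributions between attackers and honest clients; the thesis may use a different statistic.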
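Similarly, contribution (2) describes parameter compression only as truncating uploaded parameter updates so that less information is shared. One common way to realize such truncation is top-k magnitude sparsification, sketched below under that assumption; the `keep_ratio` value is illustrative and is not taken from the thesis.

```python
# Minimal sketch of a parameter compression defense: before uploading, each client
# keeps only the largest-magnitude fraction of its update and zeroes the rest,
# reducing the information available to a GAN-based attacker.
import numpy as np

def compress_update(update: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Zero out all but the top `keep_ratio` fraction of entries by magnitude."""
    flat = np.abs(update).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Threshold at the k-th largest absolute value; smaller entries are dropped.
    threshold = np.partition(flat, -k)[-k]
    return np.where(np.abs(update) >= threshold, update, 0.0)

# Example: only the strongest 10% of the parameter update survives the upload.
rng = np.random.default_rng(0)
delta = rng.normal(size=(4, 5))
print(compress_update(delta, keep_ratio=0.1))
```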
Keywords/Search Tags: federated learning, poisoning attack, label flipping attack, GAN privacy inferring attack, defense methods