
Research On The Algorithm Against Poisoning Attacks In Federated Learning Scenarios

Posted on: 2023-12-24
Degree: Master
Type: Thesis
Country: China
Candidate: Y K Deng
Full Text: PDF
GTID: 2568307172958359
Subject: Computer technology
Abstract/Summary:
As data privacy and security concerns receive more attention, traditional centralized training can no longer meet application requirements. Federated learning and its related technologies offer a practical solution to this problem. However, in federated learning the server cannot fully control the behavior of the clients, and model parameters uploaded by some clients may disrupt the training of the global model. Attackers can attack the global model by altering the uploaded model parameters (model poisoning attacks) or by polluting a client's local data (data poisoning attacks); collectively, these are called poisoning attacks. Enhancing the robustness of the global model against them is an important research direction in federated learning.

This thesis proposes two defense algorithms: RFL-MA (Robust Federated Learning against Model Poisoning Attacks), based on a committee mechanism, to counter model poisoning attacks; and RFL-DA (Robust Federated Learning against Data Poisoning Attacks), based on the update magnitude of the fully connected layer, to counter data poisoning attacks.

RFL-MA introduces the idea of a committee and assigns verification tasks to its members. Based on the verification results given by the committee, suspicious model parameters are excluded from the aggregation process. After each round of global aggregation, RFL-MA dynamically updates the committee membership, promptly removing malicious members and encouraging clients with high-quality datasets to join the committee.

RFL-DA freezes the shallow-layer parameters of a pre-trained model and lets clients update only its fully connected layer locally. It judges how suspicious each client is from the update magnitude of the fully connected layer and reduces the aggregation weights of suspicious clients, thereby limiting their impact.

Finally, a prototype federated learning system against poisoning attacks is built, implementing the RFL-MA and RFL-DA algorithms. The empirical study shows that RFL-MA and RFL-DA perform well under different data distributions. For sign-flipping attacks (an instance of model poisoning attacks), the accuracy of RFL-MA on the MNIST, FMNIST, and CIFAR-10 datasets improves by 13%, 7%, and 9.6%, respectively, over traditional defense algorithms. For label-flipping attacks (an instance of data poisoning attacks), the attack success rate of RFL-DA is close to that of the FoolsGold algorithm, but RFL-DA achieves higher accuracy, with average gains of 10.5%, 3%, and 1.7% on MNIST, FMNIST, and CIFAR-10, respectively. This shows that RFL-DA keeps the attack success rate low without sacrificing accuracy against data poisoning attacks.
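The committee-based filtering in RFL-MA can be illustrated with a minimal sketch. The function name, the majority-voting rule, and the threshold below are assumptions chosen for illustration, not the thesis implementation; each committee member votes on an uploaded update against its own local data, and only updates approved by a strict majority are kept for aggregation.

```python
def committee_filter(client_updates, committee_data, approves, threshold=0.5):
    """Keep only updates approved by a strict majority of committee members.

    client_updates: dict mapping client id -> uploaded model update
    committee_data: list of committee members' local validation data
    approves:       callable (update, member_data) -> bool vote
    (Illustrative sketch; names and voting rule are assumptions.)
    """
    accepted = []
    for cid, update in client_updates.items():
        votes = sum(1 for member in committee_data if approves(update, member))
        if votes / len(committee_data) > threshold:  # strict majority vote
            accepted.append(cid)
    return accepted

# Toy usage: updates are scalars; a member approves an update close to its data.
updates = {"c1": 1.0, "c2": 1.1, "c3": 9.0}   # c3 plays the poisoned outlier
committee = [1.05, 0.95, 1.0]
ok = committee_filter(updates, committee, lambda u, d: abs(u - d) < 0.5)
# ok == ["c1", "c2"]
```

In a real system, `approves` would load the candidate update into a model copy and check accuracy on the member's validation set; the toy scalar version only shows the voting logic.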
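The RFL-DA weighting idea — down-weighting clients whose fully-connected-layer update magnitude looks anomalous — can also be sketched. The median-based deviation measure and the reciprocal down-weighting formula below are assumptions for illustration, not the thesis's exact scheme.

```python
import numpy as np

def fc_magnitude_weights(fc_updates):
    """Compute aggregation weights from fully-connected-layer update norms.

    fc_updates: list of flattened fully-connected-layer updates, one per client.
    Clients whose update norm deviates strongly from the median are treated
    as suspicious and receive smaller weights. (Illustrative sketch; the
    deviation measure and formula are assumptions.)
    """
    norms = np.array([np.linalg.norm(u) for u in fc_updates])
    median = np.median(norms)
    # relative deviation of each client's update norm from the median norm
    deviation = np.abs(norms - median) / (median + 1e-12)
    weights = 1.0 / (1.0 + deviation)   # larger deviation -> smaller weight
    return weights / weights.sum()      # normalize so the weights sum to 1
```

A client poisoned by label flipping tends to push the fully connected layer unusually hard, so its update norm stands out from the median and its aggregation weight shrinks accordingly.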
Keywords/Search Tags:Federated Learning, Poisoning Attacks, Robustness, Defense Algorithm