As a new distributed learning paradigm with privacy-preserving expectations, federated learning has attracted widespread attention. Nevertheless, because clients' local data are invisible to the server, federated learning systems are vulnerable to poisoning attacks. Although many existing studies focus on enhancing the robustness of the federated learning global model against poisoning attacks, a variety of targeted and stealthy poisoning attacks can still bypass the defense methods, based on different defense mechanisms, deployed by federated learning systems. This poses a severe threat to the availability, integrity, and security of models in federated learning. Therefore, this thesis studies defense methods against poisoning attacks in federated learning that can effectively improve the defense effect of the federated learning system and enhance the robustness of global model training. The main research results are as follows:

(1) Aiming at the difficulty of detecting malicious parameters in federated learning, Layer Evaluation is proposed to accurately detect and locate malicious parameters associated with poisoning attacks. By analyzing why the existing state-of-the-art poisoning attacks in federated learning are stealthy, the methods for enhancing the stealthiness of poisoning attacks are summarized. Combining the interpretability perspective on deep learning models with the underlying mechanism of poisoning attacks, Layer Evaluation computes, for each layer of the local models submitted by clients, a matrix composed of the sums of squared distances among the corresponding layer parameters. The reason why Layer Evaluation detects attacks more effectively is analyzed through the distance-calculation formula, and experiments prove that Layer Evaluation can more accurately detect and locate malicious model parameters that are related to poisoning attacks and have strong
stealthiness.

(2) Aiming at the difficulty of recovering a damaged model in federated learning, Layer Clean, a defense method based on layer-parameter cleaning, is proposed. This method detects malicious layer parameters with abnormal distributions through Layer Evaluation, and applies median-based adaptive clipping and adaptive noise perturbation to resist poisoning attacks and speed up model recovery. In this thesis, both the model-cleaning effect and the anti-poisoning effect of this method are verified by experiments under two settings: the one-shot attack and the multi-shot attack. The experimental results prove that Layer Clean can speed up model recovery, effectively defend against various untargeted poisoning attacks, and enhance the robustness of the global model to backdoor attacks.

(3) Aiming at the difficulty of secure aggregation in federated learning, five aggregation methods based on Layer Evaluation are proposed: LD-Statistic, LD-Difference, LD-Kmeans, LD-Distribution, and LD-Density. These defense methods use Layer Evaluation in different ways to detect abnormal parameters among the layer parameters of clients' local models, and then filter and aggregate the local model parameters. Under a threat model in which the attacker can steal client models and control 40% of the clients to collusively mount stealthy poisoning attacks, this thesis compares six aggregation methods against four stealthy untargeted poisoning attacks and two stealthy backdoor attacks; a large number of experiments verify the defense effect of the five proposed aggregation methods. The experimental results show that the five aggregation methods proposed in this thesis generally defend against stealthy poisoning attacks better than existing robust aggregation algorithms, and enhance the robustness of the federated learning global model against poisoning attacks.
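The abstract describes Layer Evaluation as a matrix of sums of squared distances among the per-layer parameters of the clients' submitted models. A minimal numpy sketch of that computation might look as follows (the function name, the flattening of each layer, and the (clients × layers) score layout are assumptions for illustration; the thesis's exact formulation may differ):

```python
import numpy as np

def layer_evaluation(updates):
    """Illustrative sketch: for each layer, compute each client's sum of
    squared Euclidean distances to every other client's parameters for
    that layer. Returns a (num_clients, num_layers) score matrix;
    unusually large entries flag suspicious layer parameters.
    `updates` is a list of per-client models, each a list of np.ndarrays."""
    n = len(updates)
    num_layers = len(updates[0])
    scores = np.zeros((n, num_layers))
    for l in range(num_layers):
        flat = np.stack([u[l].ravel() for u in updates])     # (n, d)
        # pairwise squared distances via the Gram-matrix identity
        sq = np.sum(flat ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * flat @ flat.T
        scores[:, l] = np.maximum(d2, 0.0).sum(axis=1)       # row sums
    return scores
```

A client whose parameters for some layer lie far from everyone else's accumulates a large row sum for that layer, which is the layer-wise anomaly signal the detection builds on.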
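Layer Clean is summarized as median-based adaptive clipping plus adaptive noise perturbation applied to layers flagged as abnormal. One plausible reading of that recipe, sketched here under stated assumptions (clipping each flagged layer to the median parameter norm and scaling the noise by that bound; the thesis may define both differently):

```python
import numpy as np

def layer_clean(updates, flagged, noise_scale=1e-3, rng=None):
    """Illustrative sketch of median-based adaptive clipping plus noise
    perturbation on flagged layers (hypothetical reconstruction).
    `updates`: list of per-client models (lists of np.ndarrays);
    `flagged[l]` is True if layer l was judged abnormal."""
    if rng is None:
        rng = np.random.default_rng()
    num_layers = len(updates[0])
    cleaned = [list(u) for u in updates]
    for l in range(num_layers):
        if not flagged[l]:
            continue
        norms = np.array([np.linalg.norm(u[l]) for u in updates])
        clip = np.median(norms)                    # adaptive clipping bound
        for i, u in enumerate(updates):
            scale = min(1.0, clip / (norms[i] + 1e-12))
            noise = rng.normal(0.0, noise_scale * clip, size=u[l].shape)
            cleaned[i][l] = u[l] * scale + noise   # clip, then perturb
    return cleaned
```

Clipping bounds the influence any single (possibly poisoned) layer can exert on the aggregate, while the small perturbation helps wash residual malicious structure out of the recovered model.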
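The five LD-* aggregation methods share a filter-then-aggregate pattern: score clients' layer parameters with Layer Evaluation, exclude anomalous clients, and average the rest. The sketch below illustrates only that shared pattern with a simple median/MAD outlier test; the actual anomaly tests behind LD-Statistic, LD-Difference, LD-Kmeans, LD-Distribution, and LD-Density are not specified in the abstract and are not reproduced here:

```python
import numpy as np

def ld_filter_aggregate(updates, scores, z=2.0):
    """Illustrative filter-then-aggregate step (not any specific LD-*
    variant). Clients whose total Layer Evaluation score deviates from
    the median by more than z * MAD are excluded before averaging.
    `scores` is a (num_clients, num_layers) matrix."""
    total = np.asarray(scores).sum(axis=1)         # per-client anomaly score
    med = np.median(total)
    mad = np.median(np.abs(total - med)) + 1e-12   # robust spread estimate
    keep = np.abs(total - med) <= z * mad
    kept = [u for u, k in zip(updates, keep) if k]
    # FedAvg over the surviving clients, layer by layer
    return [np.mean([u[l] for u in kept], axis=0)
            for l in range(len(updates[0]))]
```

Median and MAD are used here because both stay meaningful even when a sizeable minority of clients (e.g. the 40% colluding attackers in the evaluated threat model) submit coordinated outliers.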