
Research On Byzantine Attack Defense Algorithm In Federated Learning Scenario

Posted on: 2021-07-16    Degree: Master    Type: Thesis
Country: China    Candidate: W Wan    Full Text: PDF
GTID: 2518306575953829    Subject: Software engineering
Abstract/Summary:
With the rapid development of deep learning, neural network models have become increasingly complex, and training a model on a single dataset often fails to meet practical demands. Collecting data from all users and uploading it to a server for centralized training, however, would leak private information. Federated learning has emerged in recent years to address this problem: it aims to protect users' private data while enabling collaborative learning over distributed data to build a global model. Nevertheless, federated learning is highly susceptible to attacks from malicious users, since the server cannot directly inspect a user's local training data.

This thesis first proposes a novel attack method called the weight attack. By constructing low-quality local models, it bypasses existing model-detection methods, prevents the aggregated model in federated learning from converging properly, and reduces the accuracy of the global model. Comparative experiments against two existing Byzantine attack methods, the label flipping attack and the sign flipping attack, show that even the simple Multi-Krum and FABA defense algorithms largely neutralize label flipping and sign flipping, whereas the weight attack remains highly effective and still greatly reduces the accuracy of the global model under both defenses.

Secondly, the thesis proposes a defense method based on a model anomaly score. It accurately identifies low-quality models and effectively resists the weight attack without accessing user data, thereby preserving privacy. Experiments on two standard datasets, MNIST and CIFAR-10, show that the weight attack defeats existing defense strategies, significantly hindering the convergence of the aggregated model and reducing its accuracy, while the proposed defense effectively mitigates the attack.

Federated learning realizes joint training over distributed data on the premise of protecting data privacy, and Byzantine attacks are inevitable in this process. Only by devising corresponding defense strategies that reduce or even eliminate the influence of malicious models can federated learning fully deliver its win-win combination of privacy protection and distributed training.
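The abstract does not spell out how the weight attack builds its low-quality local models, so the following is only a minimal sketch of the general idea, assuming the attacker can observe or estimate the statistics of benign client updates; the function name `weight_attack`, the `scale` parameter, and the noise model are all hypothetical, not the thesis's actual construction.

```python
import numpy as np

def weight_attack(benign_updates, scale=0.5, rng=None):
    """Craft a low-quality update that stays statistically close to the
    benign updates (so distance-based filters such as Multi-Krum are
    likely to accept it) while dragging the aggregate off course.

    benign_updates: list of 1-D numpy arrays, the flattened local
    updates the attacker can observe or estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    stacked = np.stack(benign_updates)
    mean = stacked.mean(axis=0)
    std = stacked.std(axis=0)
    # Shrink the useful signal and hide the damage inside noise bounded
    # by the benign population's per-coordinate spread, so the malicious
    # update's pairwise distances to honest updates look ordinary.
    noise = rng.uniform(-1.0, 1.0, size=mean.shape) * std
    return (1.0 - scale) * mean + noise
```

The point of such a construction is that distance-based defenses score updates by how far they sit from their peers; an update deliberately kept inside the benign spread evades that test even though it contributes no useful learning signal.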
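The defense is likewise named only as "based on a model anomaly score", without the scoring rule. The sketch below assumes a simple distance-based score (distance to the coordinate-wise median of all submitted updates); `anomaly_score_aggregate` and `keep_ratio` are hypothetical names, and the thesis's actual score may differ. Note that it operates only on the submitted models, never on user data, which matches the abstract's privacy requirement.

```python
import numpy as np

def anomaly_score_aggregate(updates, keep_ratio=0.8):
    """Score each client update by its distance to the coordinate-wise
    median of all updates, drop the highest-scoring clients, and
    average the rest.

    updates: list of 1-D numpy arrays, the flattened client updates.
    """
    stacked = np.stack(updates)                        # (n_clients, dim)
    median = np.median(stacked, axis=0)                # robust reference point
    scores = np.linalg.norm(stacked - median, axis=1)  # one anomaly score per client
    n_keep = max(1, int(len(updates) * keep_ratio))
    keep = np.argsort(scores)[:n_keep]                 # clients with the lowest scores
    return stacked[keep].mean(axis=0)                  # aggregate only the survivors
```

A median reference point is a common robust-aggregation choice because a minority of Byzantine clients cannot move it far, whereas they can shift a plain mean arbitrarily.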
Keywords/Search Tags:Privacy protection, Federated learning, Weight attack, Anomaly detection, Neural network