
Byzantine-Robust And Privacy-Protective Federated Learning Training Algorithm

Posted on: 2024-05-24  Degree: Master  Type: Thesis
Country: China  Candidate: J Zeng  Full Text: PDF
GTID: 2568307079470994  Subject: Electronic information
Abstract/Summary:
As a distributed machine learning approach, federated learning enables multiple participants to jointly train a global model. Each participant only trains the model locally and sends the model parameters to the server for aggregation, thus avoiding the direct exchange of training data. Although federated learning offers advantages such as high efficiency and privacy, recent research has shown that it still faces many security threats, the most representative of which are privacy inference attacks and Byzantine attacks. In a privacy inference attack, the server can recover a participant's private data from its gradient information; in a Byzantine attack, a participant uploads malicious gradients to degrade the accuracy of the global model. In recent years, many defence schemes have been proposed for each of these two problems, but few of them can defend against both attacks at the same time. To address this, this thesis proposes a federated learning training framework that resists Byzantine attacks while protecting privacy, and designs two concrete schemes based on this idea. Specifically, the main work of this thesis is as follows:

First, this thesis proposes a federated learning training scheme that resists Byzantine attacks and hides the gradients. The scheme uses CKKS homomorphic encryption to encrypt the participants' gradients before uploading, effectively preventing the server from directly accessing their gradient information. In addition, the server holds a portion of the dataset, trains on it to obtain a reference gradient, computes the cosine similarity between this reference gradient and each participant's encrypted gradient in the ciphertext domain, and uses the result to perform weighted global aggregation, thereby resisting Byzantine attacks. Compared to existing work, this scheme offers significant advantages in computational efficiency and achieves higher accuracy when defending against Byzantine attacks.

Second, this thesis proposes a federated learning training scheme that resists Byzantine attacks and further enhances privacy. This scheme builds on the first one by comparing the ciphertext data of the server and the participant with the help of garbled circuits, preventing the server from directly obtaining the cosine similarity result while still screening out malicious gradients. Moreover, the scheme designs a per-round threshold setting algorithm for the server, which improves the identification of malicious gradients as much as possible while reducing the probability of misclassifying normal gradients. Compared with the first scheme, this scheme provides a more comprehensive guarantee of participant privacy with little additional overhead or accuracy loss.
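For illustration, the following plaintext Python sketch mirrors the aggregation rule described above: each participant's gradient is weighted by its cosine similarity to the gradient the server computes on its own portion of the dataset, and updates whose similarity falls below a per-round threshold are screened out. All names and the plaintext formulation are assumptions made here for clarity; in the actual schemes the similarity is evaluated on CKKS ciphertexts, and in the second scheme the threshold comparison is carried out with garbled circuits.

```python
# A minimal plaintext sketch of the cosine-similarity weighted aggregation
# described in the abstract. All names are illustrative; the thesis computes
# the similarity on CKKS ciphertexts (and, in the second scheme, compares it
# against the threshold inside a garbled circuit), which is omitted here.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened gradient vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def robust_aggregate(server_grad: np.ndarray,
                     client_grads: list,
                     threshold: float = 0.0) -> np.ndarray:
    """Weight each client gradient by its cosine similarity to the gradient
    the server trained on its own data; gradients whose similarity falls
    below the per-round threshold are treated as malicious and excluded."""
    scores = np.array([cosine_similarity(server_grad, g) for g in client_grads])
    scores = np.where(scores < threshold, 0.0, scores)  # screen out suspected Byzantine updates
    scores = np.clip(scores, 0.0, None)                 # keep aggregation weights non-negative
    if scores.sum() == 0.0:
        return server_grad                              # fall back to the server's own gradient
    weights = scores / scores.sum()
    return np.sum([w * g for w, g in zip(weights, client_grads)], axis=0)
```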
Keywords/Search Tags:Federated Learning, Byzantine Robustness, Homomorphic Encryption, Privacy-Preserving