
Research On Defense Methods Against Inference And Poisoning Attacks In Federated Learning

Posted on: 2024-09-13
Degree: Master
Type: Thesis
Country: China
Candidate: L Chen
Full Text: PDF
GTID: 2568307079459984
Subject: Cyberspace security
Abstract/Summary:
Federated learning (FL) is an emerging distributed machine learning paradigm. In FL, participants exchange local models to jointly train an accurate global model, thereby protecting local data privacy and improving communication efficiency. The privacy-computing goal of FL is to keep data "available but not visible". However, FL still suffers from two security issues: privacy leakage and Byzantine faults. For example, a semi-honest server can launch privacy attacks to infer private information about local training data, while a Byzantine adversary can corrupt the aggregated model by sending malicious model parameters. Existing privacy-protection and poisoning-defense schemes still fall short of addressing these problems: the two usually conflict with each other and cannot be deployed at the same time, so it is difficult to strike a dynamic balance between data privacy and model robustness. To this end, this thesis proposes a privacy-protection scheme and designs a poisoning-defense algorithm for FL that are compatible with each other and can be deployed simultaneously. The main work of this thesis is as follows:

(1) To alleviate privacy leakage in FL and defend against membership inference attacks (MIA) from a semi-honest server, this thesis proposes a privacy-protection scheme based on the ESA (Encode-Shuffle-Analyze) model, called ReSHUFFLE. ReSHUFFLE suppresses the tendency to overfit during training through weakened LDP (Local Differential Privacy) noise, which reduces the accuracy of the global MIA by 6.12-14.29%. To make up for privacy defects that may remain under Weak-LDP, ReSHUFFLE introduces reconstructing shuffling to enhance privacy and anonymity, which further reduces the accuracy of the single MIA by 10.29-19.54%. Experimental results demonstrate that ReSHUFFLE protects the confidentiality and privacy of local training data while maintaining the original accuracy of the model.

(2) To solve the Byzantine failure problem in FL and resist untargeted model poisoning attacks from malicious clients, this thesis proposes a cluster-based Byzantine-robust aggregation algorithm called RCA. RCA uses a clustering algorithm based on sign statistics to identify abnormal parameters uploaded by Byzantine clients, and aggregates the remaining parameters with weights to reduce the impact of the abnormal parameters on the global model. Under Byzantine clients' inner-product manipulation attacks, the aggregation accuracy of RCA is 20.33-57.28% higher than the best accuracy of the other robust aggregation algorithms; under Byzantine clients' distance attacks, it is 12.69-25.97% higher. In addition, RCA can effectively resist poisoning attacks under various proportions of Byzantine clients and yields better aggregation results when the training data are not independent and identically distributed. RCA consistently maintains the accuracy and robustness of the global model.
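As a rough illustration of the two ideas above, the sketch below shows (a) a client-side update perturbed with mild Laplace noise, in the spirit of a weakened-LDP mechanism, and (b) a server-side aggregation that uses coordinate-wise sign statistics to down-select and weight client updates. All function names, the `keep_ratio` heuristic, and the specific noise mechanism are illustrative assumptions; this is not the thesis's actual ReSHUFFLE or RCA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_ldp_perturb(update, epsilon=8.0, clip=1.0):
    """Clip a local model update to a bounded norm and add Laplace noise.
    A comparatively large epsilon corresponds to the 'weakened' LDP
    regime described in the abstract: mild noise that mainly curbs
    overfitting rather than providing strong formal privacy."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    noise = rng.laplace(scale=2.0 * clip / epsilon, size=update.shape)
    return clipped + noise

def sign_cluster_aggregate(updates, keep_ratio=0.5):
    """Score each client update by how well its coordinate signs agree
    with the majority sign pattern, keep the best-agreeing fraction,
    and average the kept updates with agreement-proportional weights."""
    signs = np.sign(updates)                 # (n_clients, dim)
    ref = np.sign(signs.sum(axis=0))         # coordinate-wise majority sign
    agree = (signs == ref).mean(axis=1)      # fraction of agreeing coordinates
    order = np.argsort(agree)[::-1]          # most-agreeing clients first
    kept = order[: max(1, int(len(updates) * keep_ratio))]
    weights = agree[kept] / agree[kept].sum()
    return (weights[:, None] * updates[kept]).sum(axis=0)
```

In this toy version, a Byzantine client that flips the direction of its update disagrees with the majority sign pattern on most coordinates, receives a low agreement score, and is excluded before the weighted average is taken.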
Keywords/Search Tags: Federated Learning, Data Privacy, Inference Attack, Byzantine Robustness, Poisoning Attack