
Byzantine-Robust Federated Learning Based on a Trust-Synthesizing Mechanism

Posted on: 2024-04-22  Degree: Master  Type: Thesis
Country: China  Candidate: G C Geng  Full Text: PDF
GTID: 2568307106499364  Subject: Computer Science and Technology
Abstract/Summary:
Federated learning is a cross-platform machine learning method in which multiple clients, coordinated by a server, jointly train a machine learning model. Each client uploads only the local parameter update of its local model (a gradient update or a weight update) to the server, without sharing its local training data with other clients or with the server. Because federated learning is distributed, it is vulnerable to adversarial manipulation by malicious clients, which may be fake clients injected by Byzantine attackers or genuine clients under an attacker's control. Byzantine-robust federated learning aims to let the server learn a high-quality global model as long as the number of malicious clients is limited. The general idea of existing approaches is to have the server statistically analyze all local parameter updates and remove suspicious outliers before aggregating them. However, these approaches lack a root of trust that would let the server accurately identify which local parameter updates are anomalous, so malicious clients can still compromise the global model by submitting discreetly poisoned local parameter updates.

A recent work, FLTrust (NDSS 2021), proposed a Byzantine-robust federated learning approach that introduces a root of trust at the server: the server bootstraps trust by training on a tiny dataset, referred to as the root dataset. However, if the distribution of the root dataset deviates from the distribution of the overall local training data, the prediction accuracy of the global model trained with FLTrust drops significantly. That work also proposes an adaptive attack; the evaluation in this thesis finds that it incurs substantial time and computational overhead. A Byzantine attacker mounting this attack therefore uploads local parameter updates noticeably later than benign clients do, which the server can easily detect, so the attack does not fit real-world attacker models.

To address these problems, this thesis first designs a more efficient adaptive attack that multiple malicious clients can execute in parallel, so that they upload their local parameter updates to the server at roughly the same time as benign clients, preserving both the strength of the attack and the efficiency of executing it. The adaptive attack is applied to four cutting-edge federated learning methods, and experiments on four different datasets show that it is both highly effective and more efficient, making it better suited to practical distributed attack scenarios.

Second, this thesis proposes FLEST, a Byzantine-robust federated learning method based on a trust-synthesizing mechanism. Without requiring the server to access clients' local training datasets, FLEST can train a global model with high prediction accuracy even when the distribution of the root dataset deviates from that of the overall local training data, and it can effectively defend against various Byzantine attacks. This thesis argues that trust-based defenses and anomaly-detection-based defenses can complement each other and resolve each other's weaknesses. Therefore, this thesis designs a new Byzantine-robust aggregation rule built on a synthesized-trust score (STS).
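For context, the trust-score component follows the root-of-trust idea of FLTrust (Cao et al., NDSS 2021), which scores each client by the ReLU-clipped cosine similarity between its update and a server update trained on the root dataset. Below is a minimal sketch of that style of trust-weighted aggregation; the tensor names are illustrative, and the exact variant used in this thesis may differ:

```python
import torch
import torch.nn.functional as F

def fltrust_style_aggregate(client_updates, root_update):
    """Sketch of FLTrust-style aggregation (after Cao et al., NDSS 2021).

    client_updates: list of 1-D tensors, one flattened update per client.
    root_update:    1-D tensor, the server's update on the root dataset.
    """
    root_norm = root_update.norm()
    scores, normalized = [], []
    for g in client_updates:
        # Trust score TS: ReLU-clipped cosine similarity with the root
        # update, so updates pointing away from the root direction get
        # zero weight.
        ts = F.relu(F.cosine_similarity(g, root_update, dim=0))
        scores.append(ts)
        # Rescale each update to the root update's magnitude to bound
        # the influence of any single client.
        normalized.append(g * root_norm / (g.norm() + 1e-12))
    scores = torch.stack(scores)
    weights = scores / (scores.sum() + 1e-12)
    # Trust-weighted average of the normalized updates.
    return sum(w * g for w, g in zip(weights, normalized))
```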
Specifically, this thesis proposes a trust-synthesizing mechanism that combines the trust score (TS) and the confidence score (CS) into the STS via a dynamic trust ratio, and uses the STS as the weight for aggregating local parameter updates. Experimental results show that FLEST resists existing attacks as well as the adaptive attack proposed in this thesis, even when the distribution of the root dataset differs significantly from the distribution of the overall local training data. For example, against the adaptive attack on the MNIST-0.5 dataset with the bias probability set as high as 0.8, the prediction accuracy of the global model trained by FLEST is 41% higher than that of FLTrust.
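The abstract does not give FLEST's exact formulas. A plausible reading of the trust-synthesizing mechanism is a convex combination of TS and CS under the dynamic trust ratio; the sketch below is hypothetical (the blending form, the ratio schedule, and how CS is computed are assumptions, not the thesis's actual definitions):

```python
import numpy as np

def synthesized_trust_scores(ts, cs, trust_ratio):
    """Hypothetical STS: blend trust scores (TS) and confidence scores (CS)
    with a dynamic trust ratio in [0, 1]; the real FLEST rule may differ."""
    return trust_ratio * np.asarray(ts) + (1.0 - trust_ratio) * np.asarray(cs)

def aggregate_with_sts(client_updates, ts, cs, trust_ratio):
    """Weight each client's flattened update by its normalized STS."""
    sts = synthesized_trust_scores(ts, cs, trust_ratio)
    weights = sts / (sts.sum() + 1e-12)
    return sum(w * g for w, g in zip(weights, client_updates))
```

Under this reading, a larger trust ratio leans on the root-of-trust signal (TS), while a smaller one leans on the anomaly-detection signal (CS), which matches the thesis's claim that the two defense styles complement each other.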
Keywords/Search Tags: Byzantine robust, federated learning, trust-synthesizing mechanism, adaptive attack