
Research On Client-Server Optimization Method Of Federated Learning

Posted on: 2022-12-15    Degree: Master    Type: Thesis
Country: China    Candidate: T H Huang    Full Text: PDF
GTID: 2518306779489164    Subject: Automation Technology

Abstract/Summary:
Federated learning is a special kind of distributed machine learning in which multiple parties jointly train a shared global model without directly exposing their local data. Instead of sharing raw data, the participants exchange local model parameter information and jointly learn a federated model.

To address the poor global training performance caused by the plain average aggregation algorithm on the server side of the federated distillation framework, this thesis proposes FD-PDWAA, a probability-distribution-based weighted average aggregation algorithm. FD-PDWAA uses KL divergence to quantify the probability distributions uploaded by the clients and, on this basis, designs a new server-side aggregation algorithm that assigns a different aggregation weight to each uploaded distribution. This reduces the influence of low-quality clients or malicious adversaries on global training and improves global training performance (one possible form of this weighting is sketched below).

To address the degradation of client model performance caused by the heterogeneity of client data, this thesis introduces an update difference degree that measures how far a client's update deviates from the global update, and proposes ASE, a client-adaptive strategy for selecting the number of local training epochs. ASE determines each client's local epoch count from the cosine similarity between the client's local gradient update and a global gradient update estimated with the RMSProp optimization algorithm. By adapting the number of local epochs to each client, the strategy avoids the global model performance degradation caused by under- or over-fitting during local training (see the second sketch below).

Finally, this thesis designs and carries out experiments to verify the soundness and effectiveness of the proposed methods. The results show that on the MNIST and CIFAR-10 datasets, FD-PDWAA improves test accuracy over the baseline average aggregation algorithm by 2.1% and 2.2%, respectively, and exhibits stronger resistance to attacks and interference. After applying the proposed ASE strategy, the accuracy of the baseline algorithm on both test sets also improves to a certain extent, while the amount of local computation on the clients is reduced and client training efficiency is improved.
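The following is a minimal sketch of a KL-divergence-weighted aggregation step in the spirit of FD-PDWAA. The abstract only states that KL divergence quantifies the distributions uploaded by clients and that aggregation weights are derived from it; the specific choices here (the unweighted mean as the reference distribution, a softmax over negative KL divergence, and the names fd_pdwaa_aggregate and kl_divergence) are assumptions for illustration, not the thesis's exact algorithm.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability vectors."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def fd_pdwaa_aggregate(client_probs, temperature=1.0):
    """Weighted aggregation of client-uploaded probability distributions.

    client_probs: array of shape (n_clients, n_classes), each row a
    probability distribution (e.g. averaged soft labels in federated
    distillation). Clients whose distribution diverges strongly from the
    consensus receive smaller aggregation weights.
    """
    client_probs = np.asarray(client_probs, dtype=np.float64)
    # Unweighted mean as a reference ("consensus") distribution -- an assumption.
    reference = client_probs.mean(axis=0)
    # Quantify each client's deviation from the reference with KL divergence.
    divergences = np.array([kl_divergence(p, reference) for p in client_probs])
    # Smaller divergence -> larger weight (softmax over negative KL).
    logits = -divergences / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    # Weighted average replaces the plain mean of vanilla federated distillation.
    aggregated = weights @ client_probs
    return aggregated / aggregated.sum(), weights
```

Under this scheme, a client whose uploaded distribution is far from the consensus (for example, a poisoned client uploading inverted soft labels) receives a near-zero weight, which matches the abstract's stated goal of limiting the impact of low-quality or malicious clients.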
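The second sketch illustrates the ASE idea: mapping the cosine similarity between a client's local update and an RMSProp-based estimate of the global update to a local epoch count. The abstract does not give the mapping or the estimator's exact form, so the running-second-moment estimator, the linear mapping from similarity to epochs, and all names (GlobalUpdateEstimator, adaptive_local_epochs) are hypothetical.

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-12):
    """Cosine similarity between two flattened update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

class GlobalUpdateEstimator:
    """RMSProp-style running estimate of the global gradient update.

    Hypothetical helper: keeps a running second moment of observed
    global model deltas and returns the rescaled update direction.
    """
    def __init__(self, dim, beta=0.9, lr=1.0, eps=1e-8):
        self.v = np.zeros(dim)
        self.beta, self.lr, self.eps = beta, lr, eps

    def update(self, global_delta):
        # Exponential moving average of squared deltas, as in RMSProp.
        self.v = self.beta * self.v + (1 - self.beta) * global_delta ** 2
        return self.lr * global_delta / (np.sqrt(self.v) + self.eps)

def adaptive_local_epochs(local_delta, est_global_delta,
                          min_epochs=1, max_epochs=10):
    """Map local/global update alignment to a local epoch count.

    High cosine similarity -> the client may safely run more local
    epochs; low or negative similarity -> fewer epochs, so local
    training neither underfits nor drifts away from the global model.
    The linear mapping from [-1, 1] to [min, max] is an assumption.
    """
    sim = cosine_similarity(local_delta, est_global_delta)
    frac = (sim + 1.0) / 2.0  # rescale [-1, 1] -> [0, 1]
    return int(round(min_epochs + frac * (max_epochs - min_epochs)))
```

A client whose local update points in roughly the same direction as the estimated global update is allowed up to max_epochs of local training, while a client on heavily skewed data, whose update opposes the global direction, is cut back toward min_epochs; this is one concrete reading of the per-client adaptive epoch selection the abstract describes.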
Keywords/Search Tags: Federated learning, Federated distillation, Weighted aggregation, Algorithm improvement, Performance optimization