
Design And Implementation Of A Personalized Federated Learning Algorithm Based On Multiple Aggregation Servers

Posted on: 2024-02-25
Degree: Master
Type: Thesis
Country: China
Candidate: H L Liu
Full Text: PDF
GTID: 2568306944957579
Subject: Computer Science and Technology
Abstract/Summary:
Personalized Federated Learning is an optimization approach to federated learning that performs personalized processing on the client side so that each client model better fits its local data. It addresses the poor model performance caused by differences in local data distributions across clients. Among the many ways to implement Personalized Federated Learning, approaches based on Knowledge Distillation have received widespread attention because they do not restrict the client model structure: the aggregation server transfers information by distributing the prediction probabilities of the client models on a public dataset. However, existing research has not considered the impact of the number of clients on communication with the aggregation server. In real IoT scenarios, a large number of clients leads to a large public dataset, so the aggregation server must transmit a large amount of data in a single communication. In each iteration, the aggregation server needs to send its predictions on the public dataset to all clients, which places enormous communication pressure on the aggregation server as the number of clients increases.

To address these problems, this paper proposes a Personalized Federated Learning algorithm based on multiple aggregation servers, which alleviates the communication congestion at the aggregation server in large-scale client scenarios. Clients are partitioned among different aggregation servers, each of which maintains its own public dataset, and information from the other aggregation servers is obtained through parameter interaction, reducing the communication pressure on each individual server. During knowledge distillation, model performance is further improved by optimizing the model predictions. Experiments on multiple open-source datasets compare the accuracy and communication time of the proposed algorithm with those of other algorithms. Compared with the knowledge-distillation-based personalized federated learning algorithm on a single aggregation server, under the setting of 385 clients and 9 aggregation nodes used in this paper, the proposed algorithm requires as little as 0.04 times the communication time and improves accuracy by up to 2%.

Second, based on the proposed algorithm, a Personalized Federated Learning system with multiple aggregation servers is designed and implemented. The system comprises modules for data acquisition, public dataset creation, model training, parameter interaction between aggregation servers, task viewing, and task management. The data acquisition module allows users to upload private data; the public dataset creation module builds the public dataset for each aggregation server and all of the clients under it; the model training module performs the knowledge distillation training or personalized training of the aggregation servers and clients; and the parameter interaction module exchanges parameters between aggregation servers. The task viewing module displays client model performance through the Vue framework combined with the element-UI component library, and the task management module implements functions such as modifying task information.
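As a rough illustration of the round structure described above, the following sketch (a minimal example assuming a Python/NumPy setting; the names AggregationServer, aggregate_clients, and exchange_with_peers are hypothetical and are not taken from the thesis) shows how each aggregation server could average its own clients' prediction probabilities on its local public dataset and then blend in the peers' aggregated predictions through parameter interaction, so that no single server has to broadcast predictions for the entire public data.

    # Hypothetical sketch of one communication round with multiple aggregation servers.
    import numpy as np

    class AggregationServer:
        def __init__(self, public_logits_shape):
            # Each server keeps aggregated soft labels for its own public dataset only.
            self.consensus = np.zeros(public_logits_shape)

        def aggregate_clients(self, client_soft_labels):
            # Average the prediction probabilities uploaded by this server's clients
            # (the knowledge-distillation "teacher" signal for the next round).
            self.consensus = np.mean(np.stack(client_soft_labels), axis=0)

        def exchange_with_peers(self, peer_consensuses, weight=0.5):
            # Parameter interaction between aggregation servers: blend the local
            # consensus with the peers' aggregated predictions instead of shipping
            # every client's predictions to every server.
            if peer_consensuses:
                peer_avg = np.mean(np.stack(peer_consensuses), axis=0)
                self.consensus = weight * self.consensus + (1 - weight) * peer_avg

    # Example round: 3 servers, each serving 4 clients, on a 100-sample,
    # 10-class public dataset (random probabilities stand in for client outputs).
    servers = [AggregationServer((100, 10)) for _ in range(3)]
    for s in servers:
        fake_clients = [np.random.dirichlet(np.ones(10), size=100) for _ in range(4)]
        s.aggregate_clients(fake_clients)
    peers = [s.consensus.copy() for s in servers]
    for i, s in enumerate(servers):
        s.exchange_with_peers([c for j, c in enumerate(peers) if j != i])
    # Each server now broadcasts only its own (100 x 10) consensus to its own clients.

In this sketch, each server's broadcast scales with the size of its own public dataset rather than with the union of all clients' public data, which is the intuition behind the communication savings reported above.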
Keywords/Search Tags:Federated Learning, Personalized Federated Learning, Knowledge Distillation, Internet of Things