
Optimization Method For Communication Efficiency In Federated Learning

Posted on: 2024-06-20
Degree: Master
Type: Thesis
Country: China
Candidate: Y Zhang
Full Text: PDF
GTID: 2568307067973359
Subject: Computer technology
Abstract/Summary:
The popularity and rapidly improving performance of mobile intelligent devices have stimulated researchers' interest in deploying machine learning applications on such devices. As a distributed learning technology, federated learning trains models on the local data held by participants; each participant transmits its trained model to a central server, which aggregates the models and shares the aggregated result with all participants (the federated averaging step is sketched below). Since deep neural networks usually have millions of parameters, the data sent between the central server and the participants results in colossal communication overhead. At the same time, the data held by mobile smart devices is usually non-independent and identically distributed (non-IID), which slows model convergence and therefore further increases the number of training rounds and the communication overhead.

To address these problems, this thesis studies methods for reducing communication overhead in federated learning and proposes a data-balanced client selection method and an automatic pruning method for the global model. The main contributions of this thesis are as follows:

(1) A client selection method is proposed for selecting client combinations with balanced data distribution in each training round. First, a data perturbation method based on local differential privacy is designed to perturb each client's local label distribution before upload, ensuring that an honest-but-curious central server cannot recover the exact class distribution (see the perturbation sketch below). Second, a data-balanced client selection method combines the uploaded perturbed vectors to select a client combination whose overall data distribution is balanced (see the selection sketch below). This method accelerates model convergence, reducing the number of training rounds by up to 87% on the MNIST dataset and 85% on the CIFAR10 dataset compared with the federated averaging algorithm.

(2) An automated pruning method is proposed for pruning the aggregated federated learning model. Based on the TD3 reinforcement learning algorithm and channel pruning, this method constructs a pruning strategy and applies it at the aggregation stage of federated learning (see the pruning sketch below). By pruning the model, its size is reduced to 1/11 of its size at the start of training, cutting the traffic generated during training by 71% and improving the model's running speed on the client.

By optimizing communication efficiency in federated learning, the above work improves model training speed and reduces communication cost, so that more devices can participate in training. This increases the diversity of the training data, improves the generalization ability and robustness of the model, and better meets the practical application requirements of federated learning.
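For context, the aggregation step described above corresponds to federated averaging (FedAvg), the baseline the thesis compares against. A minimal sketch in Python/NumPy, assuming each client uploads a flattened parameter vector and its local dataset size (names are illustrative, not from the thesis):

import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors (FedAvg baseline).

    client_params: list of np.ndarray, one flattened model per client.
    client_sizes:  list of int, local dataset sizes used as weights.
    """
    total = sum(client_sizes)
    stacked = np.stack(client_params)                  # (n_clients, n_params)
    weights = np.asarray(client_sizes, float) / total  # per-client weight
    return (weights[:, None] * stacked).sum(axis=0)    # aggregated global model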
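The first contribution perturbs each client's label distribution under local differential privacy before upload. The abstract does not spell out the mechanism, so the following is a minimal sketch, assuming the Laplace mechanism over a normalized label histogram; the function name, the sensitivity bound, and the clipping step are illustrative assumptions:

def perturb_label_histogram(counts, epsilon, rng=None):
    """LDP-perturb a client's label histogram before upload.

    A minimal sketch, not the thesis's exact scheme: the histogram is
    normalized to a probability vector (L1 sensitivity at most 2), and
    Laplace(2/epsilon) noise is added per class, so an honest-but-curious
    server never sees the exact local class distribution.
    """
    rng = rng or np.random.default_rng()
    hist = np.asarray(counts, dtype=float)
    hist /= hist.sum()                                  # normalize to a distribution
    noisy = hist + rng.laplace(0.0, 2.0 / epsilon, size=hist.shape)
    return np.clip(noisy, 0.0, None)                    # clamp negatives from noise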
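Given the perturbed vectors, a balanced client combination can then be chosen. The sketch below is a simple greedy stand-in for the thesis's selection method: at each step it adds the client that brings the running class distribution closest to uniform (the L1 distance metric and the greedy strategy are assumptions):

def select_balanced_clients(noisy_hists, k):
    """Greedily select k clients whose combined perturbed label
    histograms are closest to a uniform class distribution.

    noisy_hists: dict mapping client_id -> perturbed histogram (np.ndarray).
    """
    n_classes = len(next(iter(noisy_hists.values())))
    uniform = np.full(n_classes, 1.0 / n_classes)
    chosen, total = [], np.zeros(n_classes)
    candidates = dict(noisy_hists)
    for _ in range(min(k, len(candidates))):
        def imbalance(cid):
            s = total + candidates[cid]
            return np.abs(s / (s.sum() + 1e-12) - uniform).sum()  # L1 gap to uniform
        best = min(candidates, key=imbalance)            # most balancing client
        total = total + candidates.pop(best)
        chosen.append(best)
    return chosen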
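The second contribution prunes channels of the aggregated global model, with a TD3 agent deciding how aggressively to prune each layer. The TD3 policy itself is beyond a short sketch, so the example below (PyTorch, illustrative only) takes the per-layer keep ratio as given and prunes convolution output channels by L1 norm, a common channel pruning criterion:

import torch

def prune_conv_channels(weight, keep_ratio):
    """Keep the output channels with the largest L1 norms.

    weight:     conv weight of shape (out_ch, in_ch, kH, kW).
    keep_ratio: per-layer fraction of channels to keep; in the thesis
                this comes from a TD3 agent, treated here as a black box.
    """
    n_keep = max(1, int(weight.shape[0] * keep_ratio))
    scores = weight.abs().sum(dim=(1, 2, 3))             # per-channel L1 norm
    keep = torch.topk(scores, n_keep).indices.sort().values
    return weight[keep], keep                            # pruned weight + kept indices

A full implementation would also slice the matching input channels of the following layer and the affected batch-norm parameters before broadcasting the smaller model back to clients.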
Keywords/Search Tags: Federated Learning, Local Differential Privacy, Channel Pruning, Reinforcement Learning