Federated learning, a distributed machine learning paradigm, has rapidly attracted great attention because it allows various participants (also called clients) to join AI model training without sharing their private data, thereby breaking data islands and protecting user privacy. Similar to distributed machine learning in the data center, federated learning proceeds in iterations. In each iteration, a client first obtains the global model to be trained from the server, trains a local model on its own data, and uploads the trained local model parameters to the server; the server then aggregates the received parameters into a new global model and starts the next iteration, until the model reaches the target inference precision. Because clients are heterogeneous in network access conditions, computing capabilities, and private datasets, how to organize such clients to achieve efficient model training is a critical issue. Existing research on the training efficiency of federated learning targets only single-model training and is not suitable for parallel multi-model training: it restricts each client to training at most one model per round, while the server makes the whole training process wait for the slowest client to complete the round, thereby wasting a large amount of idle resources at the powerful clients.

This thesis studies a multi-model federated learning system that improves the training efficiency of multiple models. In this system, multiple models are assigned to multiple clients, and each client may train several models, so that all assigned models can be trained by these clients in parallel without wasting client resources. In particular, this thesis proposes multi-model assignment algorithms that aim to improve the overall training efficiency of the system while ensuring a certain fairness among the training efficiencies of individual models. Specifically, the main contributions are as follows:

(1) A multi-model assignment based on client resources is proposed to improve the training efficiency of federated learning while ensuring fairness among the individual training efficiencies of the models. Specifically, this thesis characterizes the relationship between a model's training efficiency and the number of clients used for its training. Based on this relationship, the multi-model assignment problem is transformed into an optimization problem, and the Logarithmic Fairness based Multi-model Balancing algorithm is proposed to solve it. Simulation results show that the algorithm not only improves the overall training efficiency but also ensures fairness among the individual training efficiencies of the models.

(2) A multi-model assignment that further incorporates each model's current precision is proposed to further improve the training efficiency of federated learning. Specifically, this thesis characterizes the relationship between a model's training efficiency, the number of clients used for its training, and the model's current precision. Based on this relationship, the multi-model assignment problem is transformed into an optimization problem, and the Precision and Logarithmic Fairness based Multi-model Balancing algorithm is proposed to solve it. Simulation results show that, at the cost of a certain loss of fairness, the algorithm can further improve the training efficiency in some scenarios.
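The synchronous round structure described above (broadcast, local training, upload, aggregation) can be sketched as follows. This is a toy illustration only: the scalar "model", the single-step local update, and the simple-mean (FedAvg-style) aggregation are assumptions made for brevity, not the thesis's exact setup.

```python
def train_local(global_model, data, lr=0.1):
    """One client's local update: a single gradient-style step that pulls
    the scalar model toward the mean of the client's private data."""
    grad = global_model - sum(data) / len(data)
    return global_model - lr * grad

def federated_round(global_model, client_datasets):
    """One synchronous round: every client trains locally on the broadcast
    model, then the server aggregates by simple averaging (FedAvg-style)."""
    local_models = [train_local(global_model, d) for d in client_datasets]
    return sum(local_models) / len(local_models)

# Three clients with different private datasets (toy scalars),
# illustrating data heterogeneity across clients.
clients = [[1.0, 2.0], [3.0], [2.0, 2.0, 2.0]]
model = 0.0
for _ in range(50):  # iterate for a fixed number of rounds
    model = federated_round(model, clients)
```

Note that in this synchronous scheme every round blocks on the slowest client, which is exactly the inefficiency the thesis's multi-model assignment is designed to exploit: fast clients can train additional models instead of idling.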
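The abstract does not give the exact formulation behind the Logarithmic Fairness based Multi-model Balancing algorithm, but the general shape of the problem can be sketched. The sketch below assumes (hypothetically) that each model m has a concave, diminishing-returns efficiency curve f_m(n) in the number of assigned clients n, that the objective is the logarithmic-fairness sum Σ_m log f_m(n_m) subject to a fixed client budget, and that a greedy allocation of one client at a time is an acceptable heuristic; the function name `assign_clients` and the example curves are illustrative inventions, not the thesis's algorithm.

```python
import math

def assign_clients(num_clients, efficiency_fns):
    """Greedy heuristic for: maximize sum_m log(f_m(n_m))
    subject to sum_m n_m == num_clients and n_m >= 1.
    Repeatedly give one more client to the model whose log-efficiency
    gains the most (valid for concave, increasing f_m)."""
    num_models = len(efficiency_fns)
    alloc = [1] * num_models  # every model gets at least one client
    for _ in range(num_clients - num_models):
        # marginal gain in log f_m from adding one client to model m
        gains = [math.log(f(n + 1)) - math.log(f(n))
                 for f, n in zip(efficiency_fns, alloc)]
        alloc[gains.index(max(gains))] += 1
    return alloc

# Assumed saturating efficiency curves of the form n / (n + c),
# where a larger c means the model benefits from more clients:
fns = [lambda n: n / (n + 2),  # model A
       lambda n: n / (n + 5),  # model B: needs more clients
       lambda n: n / (n + 1)]  # model C: saturates quickly
print(assign_clients(12, fns))  # → [4, 5, 3]
```

The logarithm in the objective is what encodes fairness: because log is concave, raising a model whose efficiency is low yields a larger gain than raising one that is already high, so no model's training efficiency is starved in favor of the overall sum.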