
Research On Personalized Models And Efficiency Of Federated Learning Based On Mobile Edge Computing

Posted on: 2023-06-16  Degree: Master  Type: Thesis
Country: China  Candidate: S C Liu  Full Text: PDF
GTID: 2568307076985319  Subject: Computer Science and Technology
Abstract/Summary:
In the era of the mobile internet, users are increasingly aware of the need to protect their privacy. How to provide lower-latency, more accurate, and more reliable services while protecting user privacy and data security has become a new focus of AI research. Mobile edge computing reduces communication latency and delivers faster services by offloading computing tasks from the cloud to edge devices closer to the data source. Federated learning, as a distributed learning paradigm focused on data security, collaboratively builds high-performance machine learning models for multiple users while ensuring that data never leaves the local device, thereby protecting user privacy. Efficiently training more accurate models through federated learning based on mobile edge computing holds great promise for applications, but it also faces several key challenges. For example, in the federated learning framework, the communication cost determines the efficiency of the overall system. Moreover, the data in federated learning is generated independently by its owners, which leads to non-independent and identically distributed (non-IID) data, a problem that slows model convergence and significantly reduces accuracy. To address these problems, this work presents a series of studies on federated learning based on mobile edge computing, with the following main contributions.

First, this paper proposes a federated learning method based on deep mutual learning to address the problem of non-IID data. As a variant of federated learning, the approach decouples the two roles that the global model plays in the original framework: knowledge carrier and outcome model. Each participant keeps an independent personalized model, and the global model is used only to exchange knowledge among participants through deep mutual learning. In this way, non-IID data is turned from a detriment to the accuracy of the global model into a benefit for the participants' personalized models. Experiments show that when the global model and the participants use the same architecture, this method outperforms the federated averaging algorithm; when they use different architectures, the personalized models obtained by this method also outperform models trained independently on each participant's private data alone.

Second, to address the efficiency problem in federated learning, this paper proposes an adaptive algorithm that adjusts the amount of local computation based on dynamic metrics. The algorithm reduces the number of global iterations required for convergence by increasing the local computation performed by each participant in every global round. Under the actual resource constraints of the participants, and taking into account the negative costs of energy consumption and load, it maximizes the benefit that additional local computation in each global iteration brings to model accuracy. Experiments show that the algorithm effectively reduces the number of communication rounds and improves the efficiency of federated learning.
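To make the first idea concrete, the following is a minimal sketch (not the thesis code) of one client's local update when the global model serves only as a knowledge carrier and deep mutual learning couples it to the client's personalized model. The names personal_model, global_model, and mutual_weight are assumptions introduced for this example.

# Minimal sketch, assuming a PyTorch setup: one client's local round of
# federated deep mutual learning. The personalized model never leaves the
# client; only the global (knowledge-carrier) model is returned for
# FedAvg-style aggregation on the server.
import torch
import torch.nn.functional as F

def local_mutual_update(personal_model, global_model, loader,
                        epochs=1, lr=0.01, mutual_weight=1.0):
    opt_p = torch.optim.SGD(personal_model.parameters(), lr=lr)
    opt_g = torch.optim.SGD(global_model.parameters(), lr=lr)
    kl = torch.nn.KLDivLoss(reduction="batchmean")

    for _ in range(epochs):
        for x, y in loader:
            logits_p = personal_model(x)
            logits_g = global_model(x)

            # Deep mutual learning: each model fits the labels and also
            # mimics the other's softened predictions via a KL term.
            loss_p = F.cross_entropy(logits_p, y) + mutual_weight * kl(
                F.log_softmax(logits_p, dim=1),
                F.softmax(logits_g.detach(), dim=1))
            loss_g = F.cross_entropy(logits_g, y) + mutual_weight * kl(
                F.log_softmax(logits_g, dim=1),
                F.softmax(logits_p.detach(), dim=1))

            opt_p.zero_grad(); loss_p.backward(); opt_p.step()
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Only the knowledge carrier's weights are uploaded; the personalized
    # model stays on the device, so non-IID data personalizes rather than
    # degrades it.
    return global_model.state_dict()

Because the two models can have different architectures, this also illustrates how a small global model can carry knowledge between participants that each keep a larger personalized model.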
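The second contribution can likewise be illustrated with a small sketch. The scoring rule, the weights on energy and load, and all names below are assumptions made for this example, not the thesis algorithm; the sketch only shows the general idea of adapting the local-epoch count from round-to-round metrics under a resource cap.

# Minimal sketch: adapt the number of local epochs for the next global round
# from the accuracy gain observed in the last round, discounted by energy and
# load costs, and clipped to the client's resource budget.
def adapt_local_epochs(curr_epochs, acc_gain, energy_cost, load,
                       gain_threshold=0.001, min_epochs=1, max_epochs=10):
    # Net benefit of the extra local computation done last round
    # (hypothetical weighting of the cost terms).
    score = acc_gain - 0.1 * energy_cost - 0.1 * load

    if score > gain_threshold:
        next_epochs = curr_epochs + 1   # more local work still pays off
    else:
        next_epochs = curr_epochs - 1   # back off to save energy and time

    return max(min_epochs, min(max_epochs, next_epochs))

# Example use inside a training loop (hypothetical metric names):
# E = adapt_local_epochs(curr_epochs=E, acc_gain=acc_now - acc_prev,
#                        energy_cost=joules_used, load=cpu_util)

Increasing local epochs only while the measured accuracy gain outweighs the cost is what lets the method cut the number of communication rounds without exhausting constrained edge devices.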
Keywords/Search Tags: federated learning, mobile edge computing, deep mutual learning, adaptive tuning