
Efficient Asynchronous Federated Learning Mechanism For Edge Network Computing

Posted on: 2021-08-31
Degree: Master
Type: Thesis
Country: China
Candidate: Y Y Liao
Full Text: PDF
GTID: 2518306308978059
Subject: Cyberspace security

Abstract/Summary:
With the continuous improvement in the performance of Internet of Things (IoT) and mobile devices, a new computing architecture, edge computing, has emerged. Edge computing changes the situation in which data must be uploaded to the cloud for processing, making full use of the computing and storage capabilities of edge IoT devices. Edge nodes process private data locally and no longer need to upload large amounts of data to the cloud, reducing data-transmission latency. At the same time, artificial intelligence has shown great advantages across industries, so the demand for deploying artificial-intelligence frameworks on edge nodes is growing day by day. Federated learning is a computing mechanism that meets this need.

Federated learning means "joint learning": in a federated learning framework, multiple devices cooperate to train a shared model. Traditional federated learning is a distributed learning framework in which most of the computation (such as model training) is performed on the node side. Each node learns locally and gradually optimizes the model through interaction with a central server. Because federated learning does not require a large amount of centralized data for model training, it is well suited for machine learning on edge devices with limited data. However, federated learning has shortcomings. During training there are redundant communications between nodes and the parameter server, and as the federated learning system scales up, the cost of these redundant communications cannot be ignored. In addition, traditional federated learning is synchronous, which is inconsistent with the high mobility of edge nodes. Therefore, how to transform synchronous federated learning into asynchronous federated learning, or even into asynchronous federated learning based on heterogeneous model fusion, is a question worth studying.

Based on the above, this thesis proposes an Efficient Asynchronous Federated Learning Mechanism for edge network computing (EAFLM). EAFLM compresses the redundant communications between nodes and the parameter server during training according to a self-adaptive threshold. Experiments show that when gradient communication is compressed to 8.77% of the original number of communication rounds, test-set accuracy drops by only 0.03%. For asynchronous federated learning, this thesis proposes a dual-weight gradient correction update algorithm, which allows nodes to join or leave federated learning at any point during training without affecting model accuracy. We also study the fusion and compression of heterogeneous models, so that nodes can perform federated learning without a unified model structure, making full use of the computing and storage capacity of each node.
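To make the idea of threshold-based gradient compression concrete, the sketch below shows a generic sparsification scheme of the kind EAFLM builds on: a node transmits only gradient entries whose magnitude exceeds a threshold and accumulates the withheld remainder locally for later rounds. This is a minimal illustration, not the thesis's actual algorithm; the function name, the fixed `keep_ratio` parameter, and the residual-accumulation detail are assumptions for demonstration (EAFLM's threshold is self-adaptive).

```python
import numpy as np

def compress_gradient(grad, residual, keep_ratio=0.1):
    """Sketch of threshold-based gradient sparsification with residual accumulation.

    Only the largest-magnitude entries are transmitted to the parameter server;
    the rest are accumulated locally and folded into the next round's gradient.
    """
    acc = grad + residual                       # fold in previously withheld updates
    k = max(1, int(keep_ratio * acc.size))      # number of entries to transmit
    threshold = np.partition(np.abs(acc).ravel(), -k)[-k]
    mask = np.abs(acc) >= threshold             # entries at or above the threshold
    sent = np.where(mask, acc, 0.0)             # sparse update sent to the server
    new_residual = np.where(mask, 0.0, acc)     # withheld entries kept for later
    return sent, new_residual

# Example: a node compresses one gradient, transmitting roughly 10% of entries.
rng = np.random.default_rng(0)
g = rng.normal(size=100)
sent, res = compress_gradient(g, np.zeros_like(g), keep_ratio=0.1)
```

Because withheld entries are carried forward in the residual, no gradient information is discarded outright; it is merely delayed, which is why such schemes can cut communication sharply with only a small loss in accuracy.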
Keywords/Search Tags: federated learning, edge computing, asynchronous distributed network, gradient compression, model fusion