With the advent of the Internet of Everything and the development of mobile communication technology, the number of network edge devices and the amount of data they generate have increased dramatically. With these data, machine learning algorithms can effectively extract and summarize the characteristics of things. However, traditional machine learning needs to collect data centrally, which incurs high data transmission and storage overhead as well as the risk of data privacy leakage. To address these problems, federated learning has attracted extensive attention in recent years. As a distributed machine learning paradigm, federated learning lets a server collaborate with multiple clients to train models at the edge of the network without sharing the clients' sensitive data. However, due to the heterogeneity of computing and communication capabilities among clients, they may not return training results to the server at the same time. In particular, in synchronous federated learning, clients with poor computing and communication capabilities greatly increase the waiting delay of the server and reduce training efficiency. Although asynchronous federated learning can reduce latency, aggregating global models in a fully asynchronous manner may cause some local models to become stale, resulting in low training accuracy. In response to these problems, this paper proposes a novel adaptive heterogeneous semi-asynchronous federated learning mechanism, named Adaptive HSA_FL, which addresses both the slow training of synchronous federated learning in heterogeneous environments and the model inconsistency caused by the asynchronous setting. To solve the problem of slow training in federated systems with heterogeneous devices, this paper assigns different training intensities to heterogeneous devices: strong devices perform more local training and weak devices perform less. We first use a multi-armed bandit approach to model the
client's heterogeneous communication and computing capabilities and to learn them from historical training information. On this basis, this paper allocates an appropriate training intensity to each client according to its computing capability: generally, fewer local updates are allocated to weak clients, while more local updates are allocated to strong clients. To address the long server waiting time of the synchronous setting and the low accuracy of fully asynchronous training, this paper proposes a semi-asynchronous communication strategy: the server neither waits for all results before aggregating nor aggregates immediately after obtaining a single result. In addition, we design a suitable aggregation strategy for this scheme. Finally, simulation results show that, compared with existing synchronous and asynchronous algorithms, the proposed scheme effectively reduces training time, improves training accuracy, and outperforms the comparison algorithms in terms of the convergence speed of accuracy.
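The bandit-based intensity allocation described above can be sketched roughly as follows. This is our own illustrative example, not the paper's actual design: the candidate arm values, the deadline, and the reward definition (more local updates are better only if the client finishes in time) are all assumptions made for the sketch, using the standard UCB1 rule.

```python
import math

class IntensityBandit:
    """UCB1 bandit that picks a local-update count (arm) for one client.

    Hypothetical sketch: the arm values, reward definition, and deadline
    below are illustrative assumptions, not the paper's exact scheme.
    """

    def __init__(self, arms=(1, 2, 5, 10)):
        self.arms = list(arms)           # candidate numbers of local updates
        self.counts = [0] * len(arms)    # times each arm was tried
        self.values = [0.0] * len(arms)  # running mean reward per arm

    def select(self):
        # Try each arm once first, then follow the UCB1 rule.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        total = sum(self.counts)
        ucb = [v + math.sqrt(2 * math.log(total) / c)
               for v, c in zip(self.values, self.counts)]
        return ucb.index(max(ucb))

    def update(self, arm, reward):
        # Incremental mean update from historical training information.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


def reward_from_round(local_updates, round_time, deadline=10.0):
    # More local work earns more reward, but only if the client
    # returns its result before the server's deadline.
    if round_time > deadline:
        return 0.0
    return local_updates * (1.0 - round_time / deadline)
```

With this reward, a strong client (say 0.5 s per local update) is steered toward many local updates, while a weak client (say 3 s per update) settles on a small count, matching the "strong devices train more, weak devices train less" strategy.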
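The semi-asynchronous idea (aggregate after the first k results, rather than after all of them or after each one) can be sketched as below. Again this is only an illustration under stated assumptions: the choice of k, the exponential staleness discount `decay`, and the half-and-half blend with the global model are placeholders for whatever aggregation rule the paper actually derives.

```python
import heapq

def semi_async_round(pending, k, global_model, current_round, decay=0.5):
    """Aggregate once the first k client results arrive (semi-asynchronous).

    Illustrative sketch only: k, the staleness discount `decay`, and the
    weighted-average rule are assumptions, not the paper's exact scheme.
    `pending` is a list of (arrival_time, client_round, local_model) tuples,
    where each local_model is a flat list of parameters.
    """
    # Take the k earliest arrivals instead of waiting for all clients
    # (synchronous) or reacting to every single one (fully asynchronous).
    ready = heapq.nsmallest(k, pending, key=lambda r: r[0])
    total_w, acc = 0.0, [0.0] * len(global_model)
    for _, client_round, local_model in ready:
        staleness = current_round - client_round
        w = decay ** staleness          # stale local models count less
        total_w += w
        for i, p in enumerate(local_model):
            acc[i] += w * p
    # Blend the staleness-weighted client average with the global model.
    avg = [a / total_w for a in acc]
    return [(g + a) / 2 for g, a in zip(global_model, avg)]
```

For example, with k=2 the server aggregates the two fastest clients of a round and simply leaves the straggler's result to be discounted in a later round, which is how the scheme avoids both the synchronous waiting delay and fully asynchronous staleness.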