In recent years, the continuous development of artificial intelligence technology has produced increasingly powerful AI applications. The development of these technologies cannot be separated from the support of digital infrastructure such as big data and large models. However, as data holders pay growing attention to the privacy of their respective data, obtaining high-quality data to drive neural network training will become increasingly difficult. As a multi-party collaborative distributed machine learning approach that does not intrude on data privacy, federated learning provides new possibilities for solving the problem of data isolation. Federated learning still faces challenges such as user heterogeneity, limited bandwidth resources, and non-IID data. In complex scenarios, synchronous federated learning can see its iteration frequency dragged down by bottleneck users, while asynchronous federated learning can waste significant communication resources. Additionally, due to the lack of collaboration among users, the asynchronous method further amplifies the negative impact of non-IID data.

To solve the above problems and bridge the gaps between different aggregation patterns, this paper first studies the convergence performance of federated learning. Assuming the federated learning loss function is L-Lipschitz and strongly convex, the paper analyzes the factors that may affect the convergence rate and derives the mathematical relationship between the loss function and the number of iterations, proving that federated learning gradually converges with the number of iterations under specific conditions. This lays a theoretical foundation for the design of the federated learning mechanism. Building on this analysis, the paper takes the model update of the bottleneck user as one period and proposes a variable-frequency aggregation mechanism that lies between the synchronous and asynchronous aggregation modes. In this mechanism, the central node plans a series of aggregation instructions according to each user's update frequency and local data size, so that different users aggregate their models with a suitable number of other users who have uploaded model parameters. This maximizes the convergence speed of federated learning while reducing both the communication overhead caused by frequent aggregation and the impact of non-IID data on accuracy.

This paper also designs a series of algorithms to find the optimal solution of the mechanism, or a suboptimal solution when the problem size is large. These algorithms are further engineered to reduce solution time, enabling the mechanism to reach relatively good decisions quickly even in large-scale user scenarios.

To test the practical effectiveness of the proposed mechanism, this paper designs and implements a federated learning platform consisting of an aggregation side, EdgeX, and a device side. The aggregation side, located at the central node, performs aggregation control and related functions; the device side, deployed on a large number of terminal devices, trains local models on local data. The two sides interact through the data bridge provided by EdgeX. By running federated learning tasks on this platform and comparing them with various baseline control mechanisms, this paper verifies the effectiveness of the proposed aggregation mechanism. Moreover, in most scenarios, the proposed mechanism exhibits faster convergence than the other control mechanisms.
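The variable-frequency scheduling idea described above can be sketched as follows. This is a minimal illustrative assumption, not the thesis's actual planning algorithm: the function name, the event-driven loop, and the `min_group` threshold (the "suitable number" of co-aggregating users) are all hypothetical. Given each user's update period and local data size, the central node plans aggregation instants; at each instant, the users holding fresh updates are aggregated with weights proportional to their data sizes (FedAvg-style), so fast users aggregate among themselves between rounds of the bottleneck user.

```python
def plan_aggregations(periods, data_sizes, horizon, min_group=2):
    """Hypothetical sketch of variable-frequency aggregation planning.

    periods[u]    -- time between user u's model uploads (bottleneck
                     user = the one with the largest period)
    data_sizes[u] -- size of user u's local dataset
    horizon       -- planning horizon in the same time units
    min_group     -- minimum number of fresh users worth aggregating

    Returns a list of (time, {user: weight}) aggregation instructions.
    """
    # Candidate instants: every time some user finishes an upload.
    events = sorted({t for p in periods for t in range(p, horizon + 1, p)})
    last = [0] * len(periods)  # upload count already consumed per user
    plan = []
    for t in events:
        # Users with at least one upload since their last aggregation.
        ready = [u for u, p in enumerate(periods) if t // p > last[u]]
        if len(ready) < min_group:
            continue  # too few fresh updates; wait for more users
        for u in ready:
            last[u] = t // periods[u]
        total = sum(data_sizes[u] for u in ready)
        # Data-size-weighted aggregation, as in FedAvg.
        plan.append((t, {u: data_sizes[u] / total for u in ready}))
    return plan
```

For example, with periods `[2, 3, 6]` the user with period 6 is the bottleneck: the plan aggregates the two faster users at intermediate instants and all three users at each multiple of 6, rather than forcing everyone to wait (synchronous) or aggregating every upload alone (asynchronous).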