With the rapid development of modern network technology, the transmission rate and reliability of wireless networks have improved greatly, while rapidly evolving IoT devices provide ever-increasing computing power. Federated learning was proposed to leverage these communication and computing resources to perform machine learning tasks on IoT devices. In a federated learning system deployed in a wireless network, devices train machine learning models on their local data sets, and a base station aggregates the trained models; data privacy is preserved because the data sets never leave the devices. However, the data and computing resources of the participants differ greatly, and the heterogeneity and variability of the wireless network environment exacerbate these differences and introduce further performance problems. Research on performance optimization algorithms for federated learning in wireless networks is therefore of great significance. The specific contributions of this thesis are as follows:

(1) In existing work, the same global model is transmitted to the devices multiple times over the wireless channel, while each updated local model is received only by the base station. This ignores the shared wireless broadcast channel, incurs a large communication overhead, and slows down the whole training process. This thesis proposes the FedBroadcast protocol, which effectively exploits the shared wireless broadcast channel for bidirectional model transfer in federated learning. In the download step, the base station broadcasts one global model to all scheduled devices, and a dynamic-programming-based algorithm optimizes the set of scheduled devices. In the upload step, the broadcast channel is exploited again: selected devices receive all updated local models without waiting for the next round of model downloads. Finally, to address the block-cyclic sampling problem potentially introduced by device scheduling, this thesis employs multivariate average aggregation, which improves convergence under extreme data distributions. Extensive experiments show that FedBroadcast outperforms existing synchronous wireless federated learning methods under a variety of system settings.

(2) In a wireless network, data transmission between the devices and the base station occupies a large amount of bandwidth, consumes energy, and reduces the efficiency of the whole training process. The compression ratio achievable by existing entropy-based compression techniques is limited by the information entropy of the data, so they cannot significantly reduce the size of the model weights. Model weights are tensors composed of multiple matrices. Exploiting this structure, this thesis proposes FedSQ, a method that compresses the intermediate data through sparsification and quantization to speed up training. A dynamic compression-ratio search is further adopted, so that a higher compression ratio is used whenever the accuracy loss does not exceed a given threshold. Experimental results show that FedSQ improves accuracy and speeds up the convergence of the model.

(3) The staleness of local models is a major challenge for asynchronous wireless federated learning, and the instability of connections in a wireless network exacerbates its influence. This thesis proposes FedSA, a staleness-aware asynchronous wireless federated learning framework. First, a method that dynamically adjusts the number of triggered devices constrains the maximum staleness of the uploaded local models, which guarantees the convergence of the training process. Second, during model aggregation at the base station, the weights of the models are dynamically adjusted according to their staleness, further improving the accuracy of the global model. Extensive experiments under various settings show that FedSA significantly outperforms existing methods in terms of training time and accuracy.
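The sparsification-and-quantization compression described in contribution (2) can be illustrated with a minimal sketch. The thesis's exact scheme is not reproduced here; the top-k sparsifier, the 8-bit uniform quantizer, and all function names below are illustrative assumptions:

```python
import numpy as np

def sparsify_topk(w, ratio=0.1):
    """Keep only the largest-magnitude `ratio` fraction of entries."""
    flat = w.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest |w_i|
    return idx, flat[idx], w.shape

def quantize_uniform(values, bits=8):
    """Uniformly quantize the surviving values to `bits`-bit integers."""
    lo, hi = float(values.min()), float(values.max())
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # avoid zero scale for constant input
    q = np.round((values - lo) / scale).astype(np.uint8)
    return q, lo, scale

def decompress(idx, q, lo, scale, shape):
    """Reconstruct an approximate dense weight tensor on the receiver side."""
    w = np.zeros(int(np.prod(shape)))
    w[idx] = q.astype(np.float64) * scale + lo
    return w.reshape(shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
idx, vals, shape = sparsify_topk(w, ratio=0.1)
q, lo, scale = quantize_uniform(vals)
w_hat = decompress(idx, q, lo, scale, shape)
# Only ~10% of the entries survive, each stored in 1 byte instead of 8.
```

A dynamic compression-ratio search, as the abstract describes, would then evaluate several candidate `ratio`/`bits` settings and keep the most aggressive one whose accuracy drop stays below the given threshold.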
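The staleness-aware aggregation of contribution (3) can be sketched as follows. The polynomial decay weight `(staleness + 1) ** (-a)` and the function names are illustrative assumptions, not the thesis's exact weighting rule:

```python
import numpy as np

def staleness_weight(staleness, a=0.5):
    """Down-weight stale updates: a fresh model (staleness 0) gets weight 1,
    and the weight decays polynomially with staleness (illustrative choice)."""
    return (staleness + 1.0) ** (-a)

def aggregate(global_model, local_model, staleness, base_lr=1.0):
    """Mix one asynchronously arriving local model into the global model,
    scaling its contribution by its staleness weight."""
    alpha = base_lr * staleness_weight(staleness)
    return (1.0 - alpha) * global_model + alpha * local_model

g = np.zeros(4)
g = aggregate(g, np.ones(4), staleness=0)        # fresh update: fully adopted
g = aggregate(g, np.full(4, 2.0), staleness=8)   # stale update: damped to 1/3
```

Constraining the maximum staleness, as FedSA's device-triggering method does, would correspond to not triggering further devices while any outstanding local model could exceed the staleness bound.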