Federated learning (FL) is a promising learning paradigm that tackles the increasingly prominent data-island problem while keeping users' data local, with privacy and security guarantees. However, FL generates task-oriented data traffic over wireless networks with limited radio resources. To design communication-efficient FL, most existing studies employ first-order federated optimization methods, whose slow convergence rate results in excessive communication rounds for exchanging local model updates between edge devices and the edge server. To address this issue, this paper designs communication-efficient federated learning methods based on second-order optimization algorithms and over-the-air computation, for centralized and decentralized architectures, respectively.

For the centralized architecture, this paper adopts the idea of aggregating local Newton update steps, which avoids communicating the Hessian matrices and local gradients between the server and the devices while still exploiting second-order information for faster convergence, thereby greatly reducing the total number of communication rounds during training. In addition, over-the-air computation is employed to implement the aggregation of local Newton update steps, which combines communication with computation and further reduces the per-round communication overhead. The convergence analysis reveals the impact of errors caused by device selection and channel noise, and a corresponding joint optimization method is proposed. Extensive simulation results verify the effectiveness of the proposed method.

For the decentralized architecture, this paper implicitly exploits the second-order curvature information of the loss function through mirror-descent updates on the device side, which reduces the overhead of computing and transmitting Hessian matrices while achieving faster convergence. For the information mixing among devices in the decentralized setting, the same over-the-air computation technique is used to achieve efficient model aggregation. The convergence analysis characterizes the cumulative effect of channel noise introduced by the device consensus process. This paper controls this cumulative effect through the design of the consensus step size, and minimizes the individual noise error terms through beamforming optimization. The proposed method is then evaluated through comprehensive simulation experiments.
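To make the over-the-air aggregation idea used in both architectures concrete, the following is a minimal numerical sketch: each device pre-equalizes its local update against its channel gain, all devices transmit simultaneously so the signals superpose over the multiple-access channel, and the server rescales the noisy sum to estimate the average. The flat-fading Gaussian-channel model, gain ranges, and variable names here are illustrative assumptions, not the paper's actual system model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: K devices, each holding a d-dimensional local
# update vector (random vectors stand in for local Newton steps).
K, d = 10, 5
local_steps = rng.normal(size=(K, d))

# Assumed flat-fading channel gains and receiver noise level.
h = rng.uniform(0.5, 1.5, size=K)
noise_std = 0.01

# Over-the-air aggregation: each device pre-scales its signal by 1/h_k
# so that the simultaneous transmissions add up coherently.
tx = local_steps / h[:, None]          # per-device pre-equalization
rx = (h[:, None] * tx).sum(axis=0)     # superposition over the channel
rx += noise_std * rng.normal(size=d)   # additive receiver noise

ota_avg = rx / K                       # server-side rescaling
true_avg = local_steps.mean(axis=0)    # ideal (noise-free) average

print(np.max(np.abs(ota_avg - true_avg)))  # small, noise-limited error
```

The sum is computed "for free" by the channel itself, so one channel use aggregates all K updates; the residual error is set by the channel noise, which is what the paper's convergence analysis and beamforming optimization target.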