
Research On Multiple Parallel Federated Learning Via Over-the-Air Computation

Posted on: 2024-08-17
Degree: Master
Type: Thesis
Country: China
Candidate: G X Shi
Full Text: PDF
GTID: 2568306920983759
Subject: Electronic information
Abstract/Summary:
As a new distributed machine learning paradigm, federated learning (FL) enables multiple edge nodes to cooperatively train a shared machine learning model without collecting sensitive data, which addresses the high communication delay and privacy concerns of traditional cloud-based centralized machine learning. It is one of the distributed machine learning technologies with great application prospects. At present, most studies on FL focus on training a single model. However, with the development of the fifth-generation mobile communication system (5G), the Internet of Things (IoT), and intelligent devices, many scenarios involve multiple different tasks that each need FL to train a machine learning model. How to conduct parallel multi-task training so as to improve model convergence speed and accuracy is therefore a key research problem. On this basis, the main contributions and innovations of the thesis are summarized as follows:

1) A parallel multi-task federated learning framework based on over-the-air computation (OTA) is established, which uses OTA to upload and aggregate FL models, enabling a single cloud server to schedule multiple FL tasks in parallel. Firstly, the aggregation error caused by device selection and communication noise is quantified, which guides the design of the transmitter power factors and receiver scaling factors that minimize this error. Then, the influence of the aggregation error on FL convergence is systematically analyzed and further translated into the influence of device selection and communication noise on convergence. Based on this, a problem jointly optimizing the receiver combining and the device selection strategy is formulated. Finally, the problem is decomposed into a receiver combining subproblem and a device selection subproblem, which are solved alternately: the combining subproblem is solved with the successive convex approximation (SCA) method, while the device selection subproblem is solved with a greedy algorithm and with Gibbs sampling, respectively. Experiments on three parallel FL models show that the proposed framework can efficiently schedule multiple federated learning tasks in parallel, with test accuracy close to the theoretical optimum.

2) The analysis in Work 1) shows that when communication conditions are poor, the device selection ratio should be reduced to limit the adverse effect of communication error on model convergence; at the same time, however, this reduces the amount of training data, which hinders convergence. Therefore, to allow more devices to participate in training, this thesis introduces a reconfigurable intelligent surface (RIS) to improve the channel conditions of the devices. Based on the system model of Work 1), RIS-assisted parallel multi-task federated learning is further studied. Firstly, according to the convergence analysis of Work 1), a joint optimization problem over receiver combining, RIS phase shifts, and device selection is formulated. Secondly, this problem is decomposed into a joint receiver combining and RIS phase shift design subproblem and a device selection subproblem. To handle the non-convexity of the RIS phase shift design, the idea of the difference-of-convex algorithm (DCA) is adopted to transform it into a series of convex problems that can be solved sequentially. Finally, simulation results show that with RIS assistance, the training efficiency and accuracy of FL are significantly improved.

In summary, aiming to efficiently train multiple federated learning models in parallel under limited wireless communication resources, the thesis first constructs a parallel multi-task federated learning framework based on OTA. Then, an RIS is introduced to improve the uplink channel conditions and thereby increase the proportion of devices participating in training. As a result, the robustness of the framework to heterogeneous data is improved.
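The OTA aggregation step described above can be illustrated with a minimal sketch: selected devices transmit scaled model updates simultaneously, the signals superpose in the multiple-access channel, and the server applies a receiver scaling factor to recover the average update with residual noise. All values below (device count, channel gains, noise level, the channel-inversion power control) are illustrative assumptions, not the thesis's actual joint design of power factors and receiver combining.

```python
import numpy as np

rng = np.random.default_rng(0)

num_devices = 10   # selected devices (hypothetical)
model_dim = 5      # model parameter dimension (hypothetical)
noise_std = 0.05   # receiver noise level (hypothetical)

# Local model updates from each device.
updates = rng.normal(size=(num_devices, model_dim))

# Flat-fading channel gain magnitudes, assumed known at the transmitters.
h = rng.uniform(0.5, 1.5, size=num_devices)

# Transmit power factors: simple channel inversion so all contributions
# arrive with equal weight (the thesis designs these jointly with the
# receiver scaling to minimize aggregation error).
power_factor = 1.0 / h

# Over-the-air superposition: the channel itself sums the scaled signals,
# and the receiver observes the sum plus additive noise.
received = (h * power_factor) @ updates + rng.normal(scale=noise_std, size=model_dim)

# Receiver scaling recovers an estimate of the average model update.
aggregate = received / num_devices

# Compare against the noise-free ideal average; the gap is the
# aggregation error caused by communication noise.
ideal = updates.mean(axis=0)
print(np.abs(aggregate - ideal).max())
```

With channel inversion, the aggregation error here comes only from receiver noise; when a device's channel is weak, inverting it requires high transmit power, which is why the thesis instead trades off device selection against aggregation error.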
Keywords/Search Tags: federated learning (FL), over-the-air computation (OTA), device selection, receiver combining, reconfigurable intelligent surface (RIS)