
A Secure Aggregation Scheme For Federated Learning Parameters Based On PDMM Theory

Posted on: 2024-07-14    Degree: Master    Type: Thesis
Country: China    Candidate: L F Cheng    Full Text: PDF
GTID: 2568307106981759    Subject: Software engineering
Abstract/Summary:
State Grid Jiangsu Electric Power Co., Ltd. has accumulated a large volume of data assets over years of operation, but these assets have remained untapped because of data security concerns and cannot be jointly modeled with data from other industries. The emergence of federated learning offers an opportunity to realize the value of power data. However, current federated optimization algorithms still face three major problems: (1) model accuracy on heterogeneous data is low and needs improvement; (2) communication efficiency on heterogeneous data is poor, mainly because federated optimization algorithms converge slowly on such data; (3) parameter aggregation is insecure. Although existing federated optimization algorithms apply data privacy protection techniques to aggregate parameters on the server side, these techniques introduce new problems, such as reduced model accuracy, increased communication cost, reliance on an untrusted server, and implementation difficulty.

This thesis proposes solutions to all three problems. (1) To address the low model accuracy and communication efficiency on heterogeneous data, it applies PDMM (Primal-Dual Method of Multipliers) optimization theory, which enjoys a sublinear convergence rate, to centralized federated learning. This ensures that the converged model is unaffected by the data distributions of the participating parties while maintaining a high convergence rate. (2) To address insecure parameter aggregation, it adopts a subspace perturbation method based on distributed PDMM theory to secure the aggregation of centralized federated learning parameters. To exploit the subspace perturbation property of PDMM, the thesis generates virtual clients on the client side so that the non-convergent subspace is guaranteed to be non-empty, achieving secure aggregation of model parameters without relying on any additional data privacy protection technology. In addition, to further improve communication efficiency and reduce the clients' local computation cost, the thesis increases the number of local iterations on the client side and approximates the local loss function quadratically; it proves the convergence and convergence rate of the PDMM-based federated parameter aggregation scheme under this modification and analyzes whether the subspace perturbation method remains applicable.

The proposed scheme was applied to a linear regression model on a synthetic dataset and to softmax regression models on the MNIST and Fashion-MNIST datasets. Experimental results show that the scheme achieves a high convergence rate and high model accuracy on heterogeneous datasets, and that it maintains fast convergence and high accuracy when model parameter aggregation is protected. Moreover, the convergence speed and accuracy of the scheme are unaffected by the privacy protection level.
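For reference, the synchronous PDMM updates for the edge-based consensus formulation are sketched below, as commonly stated in the PDMM literature; the thesis's exact formulation may differ. Here f_i is party i's local loss, N_i its set of neighbors, c > 0 a penalty parameter, z_{i|j} the auxiliary variable held by party i for its edge to party j, A_{ij} = I for i < j and A_{ij} = -I for i > j, and theta in (0, 1] the averaging constant:

$$
\begin{aligned}
x_i^{(k+1)} &= \operatorname*{arg\,min}_{x_i}\ f_i(x_i) + \sum_{j\in\mathcal{N}_i}\left(\big(z_{i|j}^{(k)}\big)^{\top} A_{ij}\,x_i + \frac{c}{2}\,\big\lVert A_{ij}\,x_i\big\rVert^{2}\right),\\
z_{j|i}^{(k+1)} &= (1-\theta)\,z_{j|i}^{(k)} + \theta\left(z_{i|j}^{(k)} + 2c\,A_{ij}\,x_i^{(k+1)}\right).
\end{aligned}
$$

The subspace perturbation idea rests on the fact that z^{(k)} converges only within a subspace determined by the network graph: initializing z^{(0)} with large random values leaves a persistent random component in the complementary, non-convergent subspace, which masks the private information carried in the exchanged messages while leaving the primal iterates x_i^{(k)} unaffected. The virtual clients mentioned above serve precisely to guarantee that this non-convergent subspace is non-empty.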
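As a purely illustrative sketch, not the thesis's implementation, the toy script below runs scalar PDMM averaging over a ring of five parties with the auxiliary variables initialized at a large random scale. The ring topology, the parameter values, and all names are assumptions of the sketch; it shows that the primal iterates still converge to the true average while the exchanged messages remain perturbed:

import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # number of parties (ring topology, assumed)
a = rng.normal(size=n)                  # private local values to be averaged
c = 0.5                                 # PDMM penalty parameter (assumed)
theta = 0.5                             # averaging constant of averaged PDMM

neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]

def A(i, j):
    # Edge-direction sign: +1 if i < j, -1 otherwise.
    return 1.0 if i < j else -1.0

# Auxiliary variable z[(i, j)] is held by party i for its edge to party j.
# The large random initialization is the subspace perturbation: it offsets
# the exchanged messages so they do not expose the private values a[i].
z = {(i, j): 1e4 * rng.normal() for i in range(n) for j in neighbors[i]}

x = np.zeros(n)
for _ in range(500):
    # Primal update for f_i(x) = 0.5 * (x - a_i)^2, which has a closed form.
    for i in range(n):
        d = len(neighbors[i])
        s = sum(A(i, j) * z[(i, j)] for j in neighbors[i])
        x[i] = (a[i] - s) / (1.0 + c * d)
    # Dual update: party i sends z[(i, j)] + 2c * A(i, j) * x[i] to party j,
    # which averages the message into its own z[(j, i)].
    msgs = {(j, i): z[(i, j)] + 2.0 * c * A(i, j) * x[i]
            for i in range(n) for j in neighbors[i]}
    z = {key: (1.0 - theta) * z[key] + theta * msgs[key] for key in z}

print("true average :", a.mean())
print("PDMM estimate:", x)              # every entry converges to the average

Because the component of the random initialization lying in the non-convergent subspace is retained, the exchanged messages stay offset by it throughout, yet x still converges to mean(a); this separation of correctness from message content is the behavior the subspace perturbation method relies on.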
Keywords/Search Tags: Federated Learning, Federated Optimization Algorithm, Secure Parameter Aggregation, PDMM Theory