
Research On Lightweight And Decentralized Privacy-Preserving Federated Learning

Posted on: 2024-06-02
Degree: Master
Type: Thesis
Country: China
Candidate: L Y Zhong
Full Text: PDF
GTID: 2568307052495984
Subject: Electronic information
Abstract/Summary:
Federated learning allows multiple users to collaboratively train a global machine learning model while keeping their data sets local. Depending on the architecture, federated learning can be divided into federated learning based on a user-server architecture and federated learning based on a decentralized architecture. In the user-server architecture, users send their local gradients, also known as model updates, to an aggregation server, which aggregates them. In the decentralized architecture, users usually send their local gradients directly to other users, and each user aggregates the gradients locally (a minimal aggregation sketch follows the abstract). Under both architectures, however, federated learning faces the security threat that the gradients may leak sensitive information about a user's data set. Many privacy-preserving federated learning schemes have been proposed to protect gradient privacy, but the existing schemes suffer from limitations. This thesis proposes two schemes to address these limitations. The main contributions are as follows:

1. Existing privacy-preserving federated learning schemes based on the user-server architecture suffer from high computation and communication overhead, loss of model accuracy, and a lack of support for user dynamics. To address these problems, this thesis proposes a lightweight privacy-preserving federated learning scheme based on a dual-server architecture and secure multi-party computation. The scheme relies on key agreement, hash functions, and symmetric encryption to guarantee gradient privacy (a masking sketch follows the abstract), and it allows users to join or quit a federated learning task at any training round. The scheme is also lossless in model accuracy, because no noise is added to users' gradients. Experiments show that the scheme has low computation and communication cost, preserves model accuracy, and supports user joining and quitting efficiently; a security analysis proves that gradient privacy is protected.

2. Existing privacy-preserving federated learning schemes based on the decentralized architecture suffer from loss of model accuracy. To address this problem, this thesis proposes a decentralized privacy-preserving federated learning scheme based on secure multi-party computation. The scheme relies on key agreement, symmetric encryption, and secret sharing to guarantee gradient privacy (a secret-sharing sketch follows the abstract), and it allows users to join or quit a federated learning task at any training round. The scheme is accuracy-lossless, because no noise is added to users' gradients. Experiments and a security analysis show that the proposed scheme preserves model accuracy, supports user joining and quitting, and protects gradient privacy.
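For context, aggregation in federated learning typically amounts to a (weighted) average of the users' gradient vectors, whether it runs on a server or locally at each user. The following is a minimal sketch in Python/NumPy; the function name, the use of flattened NumPy vectors, and the weighting by data set size are illustrative assumptions, not the implementation of this thesis.

import numpy as np

def aggregate_gradients(user_grads, weights=None):
    # Stack each user's flattened gradient vector into one matrix.
    grads = np.stack(user_grads)
    if weights is None:
        # Unweighted FedAvg: plain mean over users.
        return grads.mean(axis=0)
    # Weighted average, e.g., by local data set size.
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * grads).sum(axis=0) / w.sum()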
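The first scheme derives gradient privacy from key agreement, hash functions, and symmetric encryption. A standard way such primitives yield lossless secure aggregation is pairwise masking: every pair of users derives a shared seed via key agreement, expands it into a pseudorandom vector, and the two users apply the vector with opposite signs, so all masks cancel when the updates are summed. The sketch below shows only that cancellation; the dual-server protocol, the encryption of messages, and the handling of joins and quits are omitted, and every name in it is hypothetical rather than the thesis's construction.

import hashlib
import numpy as np

def pairwise_mask(uid, shared_seeds, dim):
    # shared_seeds maps each peer's id to the pairwise secret (bytes)
    # agreed with that peer, e.g., via Diffie-Hellman key agreement.
    mask = np.zeros(dim)
    for peer, seed in shared_seeds.items():
        # Both users expand the shared secret into the same
        # pseudorandom vector.
        digest = hashlib.sha256(seed).digest()
        rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
        vec = rng.standard_normal(dim)
        # The lower-id user adds the vector and the higher-id user
        # subtracts it, so each pair's masks cancel in the sum.
        mask += vec if uid < peer else -vec
    return mask

def masked_update(grad, uid, shared_seeds):
    # What a user would send: its gradient plus all its pairwise masks.
    return grad + pairwise_mask(uid, shared_seeds, grad.shape[0])

Summing masked_update over all users cancels every pairwise mask, so the aggregate equals the exact sum of the true gradients; no noise is involved, which is why such constructions can be accuracy-lossless.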
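The second scheme builds on secret sharing. A minimal way to see why secret-shared aggregation loses no accuracy is additive sharing over a modulus: each user splits each (quantized) gradient entry into random shares that sum back to the original value, hands one share to each peer, and only sums of shares are ever combined. The modulus, the quantization of gradients to integers, and the function names below are illustrative assumptions, not the thesis's protocol.

import secrets

Q = 2**61 - 1  # illustrative prime modulus; a real protocol fixes this

def share(value, n_users):
    # Split an integer into n_users additive shares; any subset of
    # fewer than n_users shares is uniformly random.
    shares = [secrets.randbelow(Q) for _ in range(n_users - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    # Only the sum of all shares reveals the underlying value.
    return sum(shares) % Q

In aggregation, each user shares each gradient entry among the peers, every user adds up the shares it received, and summing these per-user partial sums modulo Q reconstructs the exact aggregate gradient while individual gradients stay hidden.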
Keywords/Search Tags:Federated learning, Gradient privacy, Secure multi-party computation, Secure aggregation, Privacy preservation