
Study Of Privacy-preserving Federated Learning Methods Based On Homomorphic Encryption And Secret Sharing

Posted on: 2023-03-07
Degree: Master
Type: Thesis
Country: China
Candidate: Y W Yu
Full Text: PDF
GTID: 2568307046493764
Subject: Computer Science and Technology (Computer Technology)
Abstract/Summary:
In the cloud computing environment, traditional machine learning solutions require users to upload their local data to a server for centralized training, which may leak private information such as users' behavioral patterns and consumption habits. Federated learning was proposed to address this problem: users upload only trained model parameters rather than local data, making it a promising method for distributed neural network training. However, federated learning still faces many problems, such as accuracy degradation caused by class imbalance and the ability of malicious clients to exploit processed data to poison the global model. Many recent studies have proposed solutions to these problems, but most either require users to upload additional information or fail to protect the transmitted data well enough to preserve users' privacy. It is therefore of great theoretical importance and practical value to construct secure and efficient federated learning frameworks that solve these problems. This paper focuses on privacy-preserving federated learning methods based on homomorphic encryption and secret sharing. The main work is as follows:

(1) Based on Duan et al.'s self-balancing federated learning framework (the Astraea scheme), we construct a self-balancing federated learning framework, called Secure Astraea, that protects the privacy of user data. The effect of class imbalance on federated learning accuracy is reduced by coordinating clients into groups with relatively balanced data distributions for centralized training. Moreover, the additive property of homomorphic encryption is used to aggregate users' model parameters under ciphertext, preventing attacks such as membership inference. Secure Astraea also optimizes the original scheme at the algorithm level by introducing class weights, further improving model accuracy when class imbalance is
encountered. A security analysis and efficiency evaluation are then performed, and experimental results demonstrate the feasibility of the Secure Astraea scheme.

(2) A federated learning framework that protects privacy and resists poisoning attacks, called SCONTRA, is proposed based on the CONTRA scheme. We analyze the privacy-leakage problem of the CONTRA scheme and use secret sharing to protect users' model parameters in the new SCONTRA. We design a corresponding protocol that computes the similarity between user parameters to obtain a reputation score, and adjusts each user's aggregation weight according to this score. This paper also analyzes the basic computation method and designs a more efficient optimized scheme that reduces communication during the computation. Analysis and experiments are then conducted, and the results show that the SCONTRA scheme matches the original scheme's effectiveness against poisoning attacks while being more secure.
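The additively homomorphic aggregation described in work (1) can be sketched with a toy Paillier cryptosystem: clients encrypt their (quantized) model parameters, and the server multiplies ciphertexts, which adds the underlying plaintexts without ever seeing them. This is a minimal illustration, not the thesis's actual implementation; the key sizes, Fermat primality check, and fixed-point quantization below are demo-grade assumptions.

```python
import math
import random

def _rand_prime(bits):
    # Demo-grade prime generation via a few Fermat tests.
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7, 11, 13)):
            return p

def keygen(bits=128):
    p, q = _rand_prime(bits), _rand_prime(bits)
    n = p * q
    lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
    mu = pow(lam, -1, n)           # valid because we fix g = n + 1
    return n, (lam, mu, n)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)     # random blinding factor
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def aggregate(n, ciphertexts):
    # Multiplying Paillier ciphertexts adds the plaintexts.
    n2, out = n * n, 1
    for c in ciphertexts:
        out = out * c % n2
    return out

# Server aggregates one quantized parameter from three clients under ciphertext:
SCALE = 10**6
n, sk = keygen()
updates = [0.25, -0.10, 0.40]
cts = [encrypt(n, round(u * SCALE) % n) for u in updates]
total = decrypt(sk, aggregate(n, cts))
if total > n // 2:                 # map back from mod-n to a signed value
    total -= n
print(total / SCALE)               # 0.55
```

The server only ever handles ciphertexts and the multiplied aggregate, so individual client updates are never exposed, which is what blocks membership-inference style attacks on single-client parameters.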
Keywords/Search Tags:Federated Learning, Privacy Protection, Class Imbalance, Poisoning Attack, Homomorphic Encryption, Secret Sharing
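The secret-sharing computation underlying work (2) needs the servers to compute similarities (e.g., dot products) between user parameter vectors without reconstructing them. A standard building block for this is additive secret sharing with Beaver-triple multiplication; the two-party setting, field size, and dealer-generated triples below are illustrative assumptions, not the SCONTRA protocol itself.

```python
import random

P = 2**61 - 1  # prime field for additive secret sharing

def share(x):
    # Split x into two additive shares: x = s0 + s1 (mod P).
    s0 = random.randrange(P)
    return s0, (x - s0) % P

def beaver_triple():
    # A trusted dealer or offline phase would supply these in practice.
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share(a * b % P)

def mul_shares(x_sh, y_sh):
    # Two-party multiplication of shared values using one Beaver triple.
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P    # opened d = x - a
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P    # opened e = y - b
    z0 = (c0 + d * b0 + e * a0 + d * e) % P  # party 0 adds the public d*e
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1                            # z0 + z1 = x * y (mod P)

def dot_shares(xs, ys):
    # Shared dot product: sum of pairwise shared multiplications.
    z0 = z1 = 0
    for x_sh, y_sh in zip(xs, ys):
        m0, m1 = mul_shares(x_sh, y_sh)
        z0, z1 = (z0 + m0) % P, (z1 + m1) % P
    return z0, z1

u, v = [3, 1, 4], [2, 7, 1]   # two clients' quantized parameter vectors
z0, z1 = dot_shares([share(x) for x in u], [share(y) for y in v])
print((z0 + z1) % P)          # 3*2 + 1*7 + 4*1 = 17
```

Each opened value d or e is uniformly masked by the triple, so the servers learn nothing about the underlying parameters beyond the final similarity, which is then used to assign reputation scores and aggregation weights.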