
Local Model Privacy-Preserving Study For Federated Learning

Posted on: 2022-06-01 | Degree: Master | Type: Thesis
Country: China | Candidate: K Y Pan | Full Text: PDF
GTID: 2518306479993339 | Subject: Software engineering
Abstract/Summary:
With the continuing proliferation of mobile devices and the growth of their computing and storage capabilities, concern about data security and data privacy protection has been rising, and federated learning, which emerged from the field of data privacy protection, has attracted increasing attention. As a form of distributed machine learning, federated learning keeps the original private data stored and processed on the local clients instead of uploading it directly to an untrusted central server, which largely prevents external adversaries from attacking user privacy through the raw data. Nevertheless, by analyzing differences in the model parameters uploaded by the clients, such as the weight parameters trained in a deep neural network, an adversary can still leak user privacy.

This thesis studies algorithms under two classical federated learning settings and, combined with a requirement analysis for local model privacy preservation, characterizes how well classical federated learning algorithms protect local model privacy. It then proposes a secure federated learning algorithm, the Private Push-Sum Gradient Descent algorithm. Experimental results show that, compared with the classical federated learning algorithms, the proposed algorithm makes it harder for the adversary to recover the local model, which reflects its effectiveness and superiority for local model privacy preservation. The main work and contributions of this thesis are the following:

· Based on a study of classical federated learning algorithms and an analysis of the requirements of local model privacy preservation, this thesis proposes model privacy, a concept that measures the ability of a federated learning algorithm to protect the privacy of the clients' local models.

· For the two classical federated learning settings, cross-device and cross-silo, and using the proposed concept of model privacy, this thesis proves by counterexample that when the clients locally train linear regression tasks, the adversary can successfully recover the clients' local models; that is, the classical federated learning algorithms studied here do not protect local model privacy. (A sketch of such a reconstruction is given after this abstract.)

· This thesis proposes a new secure federated learning algorithm and proves its convergence, then analyzes its local model privacy security through experimental results; the effectiveness and security of the algorithm are further verified by analyzing the influence of the relevant parameters on the local model privacy-preserving performance. (A sketch of the underlying push-sum update also follows this abstract.)

· Drawing on the concept of differential privacy, this thesis uses machine learning to design an adversary who can eavesdrop on the channels during the execution of the federated learning algorithm, collect the transmitted time series data, and train a classification model on it. Experimental results show that, under the proposed secure algorithm, changing the local model of one client has little effect on the predictions of the adversary's classification model, i.e., changing the input data has little effect on the output. Comparative experiments show that the proposed algorithm better resists the attack of the classification model trained by the adversary, i.e., it makes it harder for the adversary to recover the local model privacy. (A standard statement of the differential privacy guarantee this criterion echoes is also given below.)
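The abstract does not spell out the counterexample, so the following is only a minimal Python sketch of why FedAvg-style uploads can expose a local model for linear regression: an eavesdropper on the channel reads the client's uploaded weights, which after local training are essentially the client's private model. The variable names, step size, and training setup are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # client's private features
w_true = rng.normal(size=5)              # client's private linear model
y = X @ w_true + 0.01 * rng.normal(size=100)

w_global = np.zeros(5)                   # weights broadcast by the server
eta, local_steps = 0.1, 200              # assumed local training schedule

w_local = w_global.copy()
for _ in range(local_steps):             # plain local gradient descent
    grad = X.T @ (X @ w_local - y) / len(y)
    w_local -= eta * grad

upload = w_local                         # what the client puts on the channel
# The eavesdropper's "attack" is trivial here: the upload *is* the local model.
print(np.linalg.norm(upload - w_true))   # small residual -> model recovered
```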
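The abstract likewise does not describe the privacy mechanism inside the proposed Private Push-Sum Gradient Descent algorithm, so the sketch below shows only the standard (non-private) push-sum gradient descent update over a directed communication graph that the algorithm's name points to. The function name, signature, and parameters are assumptions made for illustration.

```python
import numpy as np

def push_sum_gd(grads, A, dim, T=500, eta0=0.1):
    """Plain (non-private) push-sum gradient descent on a directed graph.

    grads : list of per-node gradient oracles, grads[i](z) -> ndarray
    A     : column-stochastic mixing matrix; A[i, j] > 0 iff node j sends to i
    """
    n = len(grads)
    x = np.zeros((n, dim))     # push-sum numerators
    y = np.ones(n)             # push-sum weights (denominators)
    for t in range(1, T + 1):
        x = A @ x              # mix numerators along the directed edges
        y = A @ y              # mix weights the same way
        z = x / y[:, None]     # de-biased local estimate of the average
        eta = eta0 / np.sqrt(t)
        for i in range(n):     # each node takes a local gradient step
            x[i] -= eta * grads[i](z[i])
    return x / y[:, None]

# Example: three nodes minimize the average of simple quadratics
targets = np.array([[0.0], [3.0], [6.0]])
grads = [lambda z, c=c: z - c for c in targets]   # f_i(z) = ||z - c_i||^2 / 2
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])                   # column-stochastic ring
print(push_sum_gd(grads, A, dim=1))               # each row -> approx. 3.0
```

The ratio x_i / y_i corrects the bias introduced by a column-stochastic (rather than doubly stochastic) mixing matrix, which is what lets push-sum run over directed graphs where nodes only know their out-degrees.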
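For reference, the differential privacy guarantee that the last contribution echoes is commonly stated as follows; this is the textbook formulation, and the thesis's exact formalization of "changing one client's local model" may differ. A randomized mechanism $\mathcal{M}$ is $\varepsilon$-differentially private if for all neighboring inputs $D, D'$ and all measurable sets $S$,

$$\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon} \, \Pr[\mathcal{M}(D') \in S].$$

In the adversary model above, $D$ and $D'$ would be two executions differing in one client's local model, and $\mathcal{M}(\cdot)$ the channel transcript observed by the eavesdropper; the experiments test the informal analogue that the adversary's classifier output barely changes between the two.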
Keywords/Search Tags:Federated Learning, Privacy-Preserving, Differential Privacy, Distributed Optimization