
Research On Privacy Protection Methods For Federated Learning

Posted on: 2024-03-31
Degree: Master
Type: Thesis
Country: China
Candidate: X Yang
Full Text: PDF
GTID: 2568306941952709
Subject: Master of Electronic Information (Professional Degree)

Abstract/Summary:
Big data-driven machine learning has brought serious privacy disclosure issues. Federated learning alleviates this problem by keeping training data local, but privacy can still leak throughout the training process. This thesis therefore studies privacy protection methods for horizontal federated learning. The main research contents are as follows:

(1) To counter differential attacks by curious or malicious clients during federated training, this thesis proposes DP-FedAC, a federated learning algorithm with centralized differential privacy. First, the federated accelerated stochastic gradient descent algorithm (FedAC) is optimized to improve the server's aggregation: after the parameter update differences are computed, the global model is updated by gradient aggregation to promote stable convergence. Then, centrally added Gaussian noise satisfying differential privacy hides the contributions of the members participating in training, protecting the participants' private information (a minimal sketch of this noisy aggregation step is given after the abstract). The moments accountant (MA) is introduced to compute the privacy loss, further balancing model convergence against privacy cost. Finally, the overall performance of DP-FedAC is evaluated experimentally against FedAC, distributed MB-SGD, and distributed MB-AC-SGD. The results show that under infrequent communication the linear speedup of DP-FedAC is closest to that of FedAC, far better than the other two algorithms, and more robust; in addition, DP-FedAC achieves the same model accuracy as FedAC while protecting privacy, demonstrating the algorithm's advantages and usability.

(2) To address the various potential adversary roles and attack methods in federated learning, a differentially private federated learning method with anonymity is proposed to protect privacy across the entire training process. First, a third-party server is introduced to build a three-layer anonymous federated learning architecture: the FL server and clients obtain public keys from a certificate authority (CA) in advance and use asymmetric encryption to encrypt and decrypt transmitted information, meeting the architecture's basic privacy requirements. Then, the LDP-FL algorithm is improved with a privacy budget allocation scheme based on the number of communication rounds and adaptive data perturbation, further protecting the privacy of local datasets (see the local perturbation sketch below). Cosine similarity is used to quantify and rank the differences between models, so that the global model is updated by selective aggregation (also sketched below). Finally, the algorithm is validated on the MNIST dataset with a CNN model. The experimental results show that within an appropriate privacy budget range the algorithm satisfies differential privacy while the model still attains high accuracy, indicating a good balance between privacy and model utility; the algorithm also offers good scalability and low communication overhead.
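The server-side step of (1), aggregating clients' parameter-update differences and then adding centrally calibrated Gaussian noise, can be illustrated with a minimal sketch. The function name, the clipping bound, and the noise multiplier below are illustrative assumptions, not the thesis's actual DP-FedAC implementation.

```python
import numpy as np

def aggregate_with_central_dp(global_model, client_updates, clip_norm, noise_multiplier, lr=1.0):
    """Minimal sketch of centralized DP aggregation (assumed names, not the thesis code).

    Each client's update (local model minus global model) is clipped in L2 norm,
    the clipped updates are averaged, Gaussian noise calibrated to the clipping
    bound is added, and the global model is moved by the noisy average.
    """
    clipped = []
    for delta in client_updates:
        norm = np.linalg.norm(delta)
        # Clip so a single client's contribution is bounded by clip_norm.
        clipped.append(delta * min(1.0, clip_norm / (norm + 1e-12)))

    avg = np.mean(clipped, axis=0)
    # Noise std proportional to the per-client sensitivity clip_norm / n_clients
    # hides any individual client's contribution to the aggregate.
    std = noise_multiplier * clip_norm / len(client_updates)
    noisy_avg = avg + np.random.normal(0.0, std, size=avg.shape)

    # Apply the noisy aggregated update to the global model.
    return global_model + lr * noisy_avg
```

In practice the cumulative privacy loss of repeating this step over many rounds would be tracked with a moments accountant, as the abstract mentions; that bookkeeping is omitted from this sketch.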
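The local perturbation of (2), splitting a total privacy budget across communication rounds and perturbing local values before upload, could look roughly like the following. The even budget split, the Laplace mechanism, and the clipping range are generic stand-ins; the thesis's improved LDP-FL uses its own allocation and adaptive perturbation scheme, which the abstract does not specify.

```python
import numpy as np

def per_round_budget(total_epsilon, total_rounds):
    """One simple allocation: split a total LDP budget evenly across rounds."""
    return total_epsilon / total_rounds

def perturb_update(update, epsilon, clip):
    """Perturb a local model update before upload under local differential privacy.

    Values are clipped to [-clip, clip] (per-coordinate sensitivity 2*clip) and
    Laplace noise is added; epsilon here is the per-coordinate, per-round budget.
    This is a generic LDP mechanism standing in for the thesis's adaptive one.
    """
    clipped = np.clip(update, -clip, clip)
    scale = 2.0 * clip / epsilon          # Laplace scale = sensitivity / epsilon
    return clipped + np.random.laplace(0.0, scale, size=update.shape)
```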
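The selective aggregation of (2), ranking uploaded models by how closely they match the current global model and aggregating only the closest ones, can be sketched as below. Treating models as flattened parameter vectors and keeping a fixed fraction of them are assumptions for illustration; the abstract does not give the thesis's exact ranking and selection rule.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened parameter vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def selective_aggregate(global_model, client_models, keep_fraction=0.8):
    """Rank client models by cosine similarity to the global model and
    average only the most similar fraction of them (an assumed selection rule)."""
    sims = [cosine_similarity(model, global_model) for model in client_models]
    order = np.argsort(sims)[::-1]                       # most similar first
    k = max(1, int(len(client_models) * keep_fraction))  # number of models to keep
    selected = [client_models[i] for i in order[:k]]
    return np.mean(selected, axis=0)                     # new global model
```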
Keywords/Search Tags: federated learning, privacy protection, differential privacy, Gaussian noise, asymmetric encryption