
Research On Attacks And Privacy-Preserving Methods For Federated Learning Based On Gradient Transformation

Posted on: 2024-05-22    Degree: Master    Type: Thesis
Country: China    Candidate: H H Yu    Full Text: PDF
GTID: 2568307061992089    Subject: Software engineering
Abstract/Summary:
Although federated learning protects participants' local data by exchanging only model parameters between the participants and the server, its training mechanism also introduces new privacy risks. Research has found that the exchanged model gradients can leak private information about the training data; for example, a model inversion attack can infer attribute values of the training data from the model. Because clients train locally and submit model gradients to the server for aggregation, an honest-but-curious server can intercept the gradients shared by clients and reconstruct the original data through a gradient leakage attack. However, existing privacy-protection techniques such as differential privacy, homomorphic encryption, and secure multi-party computation cannot provide sufficient defense without high computational overhead. These highly general methods are also cumbersome to implement, and research on federated learning privacy tends to focus on a single aspect, such as security level or performance, whereas in practice computing speed and privacy risk matter equally. It is therefore necessary to consider efficiency and privacy together and to study federated learning systems that give equal weight to both. In addition, no existing work has focused on balancing security and performance against gradient leakage attacks in federated learning with lightweight privacy-protection mechanisms. This thesis therefore focuses on targeted defenses against one specific attack in federated learning, the gradient leakage attack, and proposes two defense methods that satisfy both efficiency and privacy: one based on gradient differences and one based on gradient perturbation. The main research contents are as follows:

(1) A federated learning training framework based on gradient differences is proposed to defend against the deep gradient leakage attack and keep data privacy from being leaked. Federated learning protects private data by keeping the data local and uploading only training gradients, yet the deep gradient leakage attack can effectively recover the original data and the true labels from those gradients. Targeting this attack, we analyze the cause of gradient leakage from a matrix-vector perspective and propose a federated averaging framework based on local gradient transformation to protect the privacy of training gradients. We design a gradient transformation method based on gradient differences that effectively prevents an honest-but-curious server from reconstructing data from the gradients while preserving the robustness of the framework. Finally, we conduct defense experiments on 5 datasets with 5000 images. The results show that a federated model trained with the proposed method effectively prevents recovery of the original data and defends well against label stealing, and the defense improves as the number of classes grows: the defense rate exceeds 90% on CIFAR-10 and reaches 98% on CIFAR-100. In terms of computational performance, compared with mainstream defenses such as federated differential privacy and homomorphic encryption, the proposed method maintains accuracy without sacrificing speed.
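The abstract does not give the exact form of the gradient-difference transformation. The sketch below is one minimal interpretation, assuming each client uploads the round-over-round change of its local gradient (d_t = g_t - g_{t-1}) so that the server can still track the average gradient by accumulating averaged differences, without ever receiving a raw per-client gradient. The class names GradDiffClient and GradDiffServer are illustrative, not taken from the thesis.

```python
import numpy as np

class GradDiffClient:
    """Client that uploads gradient differences instead of raw gradients."""
    def __init__(self, dim):
        self.prev_grad = np.zeros(dim)            # g_{t-1}, initialised to zero

    def upload(self, grad):
        diff = grad - self.prev_grad              # d_t = g_t - g_{t-1}
        self.prev_grad = grad.copy()
        return diff

class GradDiffServer:
    """Server that only ever sees differences, never a raw client gradient."""
    def __init__(self, dim):
        self.avg_grad = np.zeros(dim)             # running estimate of mean_i(g_t)

    def aggregate(self, diffs):
        # mean_i(d_t) = mean_i(g_t) - mean_i(g_{t-1}), so accumulating the
        # averaged differences recovers the current average gradient.
        self.avg_grad += np.mean(diffs, axis=0)
        return self.avg_grad

# Toy usage: 3 clients with 4-dimensional gradients over two federated rounds.
dim = 4
clients = [GradDiffClient(dim) for _ in range(3)]
server = GradDiffServer(dim)
for _ in range(2):
    grads = [np.random.randn(dim) for _ in clients]
    avg_grad = server.aggregate([c.upload(g) for c, g in zip(clients, grads)])
```

Under this assumed formulation, every uploaded vector mixes information from two different local training rounds, which is what would make it harder for an honest-but-curious server to reconstruct any single batch of data from what it receives.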
(2) For the deep gradient leakage attack under non-independent and identically distributed data, we prove that the change in the gradient's sensitivity with respect to the training data is an important factor in measuring the information leakage risk. Based on this observation, we propose a novel defense that perturbs the gradient to match the information leakage risk, reducing the defense overhead while keeping privacy protection sufficient. Our other key finding is that the global correlation of gradients can compensate for this perturbation; based on this compensation, training accuracy can be guaranteed. Experiments on MNIST, Fashion-MNIST, and CIFAR-10 defend against two gradient leakage attacks. Without sacrificing accuracy, our lightweight defense reduces the PSNR and SSIM values between the reconstructed and original images by more than 60% for both attacks compared with baseline defense methods.
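As with the first method, the abstract does not specify how the leakage risk is estimated or how the perturbation is scaled. The sketch below is a minimal illustration under two assumptions: the risk is proxied by a finite-difference estimate of how much the gradient changes when the input batch is slightly perturbed, and Gaussian noise is scaled in proportion to that risk so low-risk gradients are barely perturbed. The names leakage_risk and perturb_gradient are hypothetical, not the thesis's API.

```python
import numpy as np

def leakage_risk(grad_fn, batch, eps=1e-3):
    """Finite-difference proxy for how strongly the gradient reacts to the data."""
    g0 = grad_fn(batch)
    g1 = grad_fn(batch + eps * np.random.randn(*batch.shape))
    return np.linalg.norm(g1 - g0) / eps

def perturb_gradient(grad, risk, base_sigma=0.01):
    """Add Gaussian noise whose scale grows with the estimated leakage risk."""
    sigma = base_sigma * risk
    return grad + np.random.normal(0.0, sigma, size=grad.shape)

# Toy usage with a linear least-squares model: grad = X^T (X w - y).
rng = np.random.default_rng(0)
w = np.zeros(3)
X, y = rng.standard_normal((8, 3)), rng.standard_normal(8)
grad_fn = lambda Xb: Xb.T @ (Xb @ w - y)
risk = leakage_risk(grad_fn, X)
noisy_grad = perturb_gradient(grad_fn(X), risk)
```

The design intent this illustrates is the one stated in the abstract: noise is spent where the gradient is most revealing about the data, so overall perturbation (and thus accuracy loss) stays lower than applying a uniform noise level everywhere.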
Keywords/Search Tags:Federated Learning, Gradient Transformation, Privacy Protection, Gradient Leakage, Differential Privacy