
Research On Differential Privacy Optimization Algorithm For Deep Learning

Posted on: 2022-11-01 | Degree: Master | Type: Thesis
Country: China | Candidate: Y H Hu | Full Text: PDF
GTID: 2518306770471954 | Subject: Automation Technology
Abstract/Summary:
With the advent of the era of big data, deep learning algorithms based on neural networks are widely used in real-life scenarios such as biomedicine and facial recognition. However, attackers with different background knowledge can profit by directly obtaining sensitive information from the raw data or indirectly extracting model parameters, and can even cause model misclassification by generating adversarial examples. The privacy leakage and security risks faced by deep learning directly hinder its development, so the privacy protection of deep learning has gradually become a research hotspot in recent years. Differential privacy, a definition of privacy backed by rigorous mathematical proofs, aims to protect sensitive information from being inferred by attackers. Scholars have extended differential privacy mechanisms to deep learning models, mainly by adding noise drawn from a specific distribution to the protected privacy unit and measuring the degree of privacy protection through the privacy budget ε: the smaller the allocated privacy budget, the stronger the protection of the privacy unit. The most classic differentially private deep learning algorithm is differentially private stochastic gradient descent (DPSGD), which protects neural network gradients in image classification tasks. However, the privacy of DPSGD comes at the cost of model utility, so a solution that balances the privacy and utility of the model is urgently needed. Moreover, the DPSGD algorithm has been shown to have difficulty resisting the interference of adversarial examples, causing models to fall short of expected results on different tasks. Therefore, how to improve the robustness of a differentially private model while preserving its privacy is also a research topic worthy of attention. Based on the problems existing in the DPSGD algorithm, this paper designs two solutions to improve the privacy, utility and robustness of the model. The main research contents are as follows:

(1) Aiming at the problem that the privacy and utility of the DPSGD model are difficult to balance, an adaptive clipping bound deep learning with differential privacy algorithm (ACDP) is proposed. First, the impact of clipping bounds on the model is evaluated from the perspectives of gradient type, number of iterations and network layers, so that different types of gradients can be clipped hierarchically. Then, the gradients are grouped and aggregated by distance calculation, so that gradients with similar values form a cluster and all gradients within a cluster share the same clipping bound. Finally, the adaptive clipping bound is quantified using the standard deviation as the objective function, and the rationality of this clipping-bound setting is proven theoretically. A minimal sketch of the idea follows.
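To illustrate the mechanics behind ACDP, here is a minimal Python sketch of a differentially private gradient step whose clipping bound is derived from the statistics of the per-example gradient norms. The mean-plus-standard-deviation rule and the helper names (adaptive_clip_bound, dp_sgd_step) are illustrative assumptions standing in for the thesis's standard-deviation-based objective, not its exact formulation:

```python
import numpy as np

def adaptive_clip_bound(per_example_grads, scale=1.0):
    # Assumed stand-in for the thesis's standard-deviation objective:
    # set the bound from the mean and spread of per-example gradient norms.
    norms = np.linalg.norm(per_example_grads, axis=1)
    return norms.mean() + scale * norms.std()

def dp_sgd_step(params, per_example_grads, lr=0.1, noise_multiplier=1.0,
                rng=np.random.default_rng(0)):
    # One DPSGD-style update: clip each per-example gradient to the
    # adaptive bound C, average, then add Gaussian noise calibrated to C.
    C = adaptive_clip_bound(per_example_grads)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    mean_grad = clipped.mean(axis=0)
    noise = rng.normal(0.0, noise_multiplier * C / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

In ACDP this bound would be computed per cluster of similar gradients rather than globally, so each group is clipped at a scale matched to its own magnitude.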
(2) In view of the poor robustness of the DPSGD model, a deep learning with differential privacy algorithm with robust adversarial weights (Adv RGDP) is proposed. First, a weighted gradient-extraction method, which measures each gradient's proportion relative to the mean gradient, is used to divide gradients into different types according to their correlation with the model output. The strong gradients, which retain the original gradient information, are then fed into the shallow network for adversarial training, thereby enhancing the randomness of the model parameters; the weak gradients are dropped, reducing the complexity of matching gradients to features.

(3) A privacy analysis of the ACDP algorithm is carried out, and the ACDP and Adv RGDP algorithms are verified on four benchmark datasets. Compared with the DPSGD algorithm, the proposed models achieve a comparable degree of privacy while improving utility and robustness. Finally, the classical adversarial attacks FGSM and FGM are used to demonstrate the superiority of the Adv RGDP algorithm in resisting the interference of adversarial examples, as sketched below.
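To give a sense of how the robustness evaluation in (3) works, below is a hedged PyTorch sketch of an FGSM attack and the robust-accuracy metric it induces. The epsilon value and the clamping of inputs to [0, 1] are illustrative assumptions about the image preprocessing, not details taken from the thesis:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    # FGSM: perturb the input one step in the direction of the sign of
    # the loss gradient with respect to the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, epsilon=0.1):
    # Accuracy on FGSM-perturbed inputs: the kind of metric used to
    # compare Adv RGDP against DPSGD under adversarial interference.
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```

A model that keeps a higher robust accuracy as epsilon grows is more resistant to adversarial interference; the same harness applies to FGM by replacing the sign step with a norm-scaled gradient step.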
Keywords/Search Tags:Differential privacy, Deep learning, Clipping bound, Robustness