With the continuous advancement of science and technology, deep learning has developed rapidly and is widely applied. High-quality deep learning models rely on large datasets for training, and these datasets often contain sensitive information. As model accuracy continues to improve, the risk of personal privacy leakage grows increasingly serious, and attackers employ various attack methods to mount privacy attacks. Differential privacy, a privacy-protection technique with rigorous mathematical guarantees, is robust to arbitrary attacker background knowledge and achieves protection by adding random noise as perturbation. Combining differential privacy with deep learning addresses these privacy problems, balances the privacy and utility of both data and models, and promotes the development of the technology.

This paper studies a variety of privacy-protection methods and summarizes the characteristics of attacks on, and defenses for, deep learning. Based on an analysis of the different stages of deep learning, differential privacy mechanisms are deployed at the input, hidden, and output layers of the network model, and the following two methods are proposed on the basis of existing differential privacy protection methods.

First, to improve model accuracy, a coordinate-adaptive Gaussian differential privacy protection method is proposed. Gaussian differential privacy analyzes the definition of differential privacy from the perspective of hypothesis testing and provides a tighter privacy guarantee. During training, Poisson sampling is used, and the stochastic gradient descent algorithm adaptively clips gradients and adds noise according to the sensitivity of the gradient in each coordinate dimension, preventing the introduction of excess noise. Using the properties of Gaussian differential privacy, the privacy loss of the trained model is tracked losslessly throughout training, and the overall privacy loss is computed. Experiments verify the feasibility of this method, which achieves better accuracy and stability under the same privacy budget.

Second, to improve the privacy-protection effect of the model, this paper proposes a Gaussian differential privacy protection method based on the functional mechanism. The model is doubly perturbed by combining gradient perturbation with objective-function perturbation. During gradient perturbation, the Adam optimizer is selected for training so that escape from saddle points is not overly affected by their flatness, improving the stability of the model. During objective-function perturbation, the functional mechanism converts the loss function into polynomial form and adds random noise to its coefficients to achieve differential privacy protection; the allocation of the privacy budget is analyzed experimentally. This method offers a stronger privacy-protection effect and a wider scope of application.

Finally, the requirements and functions of a privacy-protection tool are analyzed, the tool is developed using the coordinate-adaptive Gaussian differential privacy protection method, and its functions are tested.
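The first method's training step — Poisson subsampling, per-coordinate clipping, and Gaussian noise calibrated to each coordinate's clip bound — can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the function names are invented here, and the per-coordinate clip bounds are assumed to be supplied by some adaptive estimate (e.g. a running statistic of past gradient magnitudes), which is not shown.

```python
import random

def poisson_sample(n, q, rng):
    """Poisson subsampling: each of the n examples is included independently with prob q."""
    return [i for i in range(n) if rng.random() < q]

def adaptive_dp_sgd_step(params, per_example_grads, clip_bounds, sigma, lr, rng):
    """One DP-SGD step with per-coordinate clipping and Gaussian noise (sketch).

    per_example_grads: list of gradient vectors, one per sampled example.
    clip_bounds: per-coordinate clip bound C_j (assumed estimated adaptively elsewhere).
    sigma: noise multiplier; the noise std on coordinate j is sigma * C_j.
    """
    dim = len(params)
    batch = max(len(per_example_grads), 1)
    # Clip each example's gradient coordinate-wise to [-C_j, C_j] and sum.
    summed = [0.0] * dim
    for g in per_example_grads:
        for j in range(dim):
            summed[j] += max(-clip_bounds[j], min(clip_bounds[j], g[j]))
    # Add Gaussian noise scaled to each coordinate's sensitivity, average, and step.
    return [
        params[j] - lr * (summed[j] + rng.gauss(0.0, sigma * clip_bounds[j])) / batch
        for j in range(dim)
    ]
```

Because the noise standard deviation follows each coordinate's own clip bound rather than a single global norm bound, coordinates with small gradients receive proportionally less noise, which is the accuracy benefit the coordinate-adaptive method targets.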
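The functional mechanism of the second method can be sketched in its simplest setting, one-dimensional linear regression, where the squared loss expands exactly into a polynomial in the model parameter. The function name is illustrative, and Gaussian noise on the coefficients is assumed here to match the thesis's Gaussian differential privacy setting (the classic functional mechanism of Zhang et al. uses Laplace noise); noise calibration to the coefficients' sensitivity is abstracted into a single `sigma` parameter.

```python
import random

def functional_mechanism_linreg(xs, ys, sigma, rng):
    """Functional mechanism for 1-D linear regression (illustrative sketch).

    The squared loss L(w) = sum_i (y_i - w*x_i)^2 expands to the polynomial
    c2*w^2 + c1*w + c0. Random noise is added to the coefficients, and the
    perturbed polynomial is minimized instead of the true loss.
    """
    c2 = sum(x * x for x in xs)
    c1 = sum(-2.0 * x * y for x, y in zip(xs, ys))
    # c0 = sum(y^2) does not affect the minimizer, so it is omitted here.
    c2_noisy = c2 + rng.gauss(0.0, sigma)
    c1_noisy = c1 + rng.gauss(0.0, sigma)
    c2_noisy = max(c2_noisy, 1e-6)  # keep the perturbed objective convex
    return -c1_noisy / (2.0 * c2_noisy)  # argmin of c2_noisy*w^2 + c1_noisy*w
```

Because the noise enters only through the polynomial coefficients, the perturbed objective can be minimized with any optimizer without further privacy cost, which is what allows the thesis to combine it with gradient perturbation for a double perturbation.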