In the era of big data, deep learning, relying on rich data resources, has made great progress. It has achieved remarkable accomplishments in many fields such as image recognition, speech recognition, and natural language processing. However, the training data may contain personally sensitive information, which can lead to serious privacy leakage. With the growing emphasis on personal privacy and the improvement of relevant laws and regulations, the potential security risks of deep learning seriously hinder its application and development. Achieving privacy protection for deep learning at low cost has therefore become an important research topic. Owing to its simplicity and quantifiable privacy guarantees, differential privacy stands out among privacy-preserving algorithms: it safeguards data privacy in a lightweight way and has been successfully applied in privacy-preserving scenarios in several fields. However, differential privacy is achieved by adding noise, which degrades model accuracy to some extent, so deep learning models with differential privacy face a trade-off between privacy and utility. To address this issue, this thesis makes the following contributions.

(1) We propose a differentially private deep learning method based on adaptive segmentation of feature-relevance regions. First, the relevance between each input feature and the model output is obtained through relevance analysis. On this basis, the input features are segmented into regions with different relevance levels, and noise is injected into the input features adaptively according to the regional contribution of each region (see the first sketch below). In addition, a polynomial approximation of the loss function is derived via Taylor expansion, and noise is injected into its coefficients. Because the noise is added independently of the training phase, the privacy budget does not accumulate across training steps, which reserves more training room for the model. The results show that the method achieves a good trade-off between the privacy and utility of the model.

(2) We propose a differentially private deep learning method based on adaptive clipping, a scheme that adds noise to the gradients. The method selects the p-th percentile of the ℓ2-norms of historical gradients as the gradient clipping threshold for the current iteration; the gradients are clipped to this threshold and noise is then added. The threshold is thus adjusted adaptively as training progresses, and since the threshold parameterizes the noise variance, it also controls the amount of added noise (see the second sketch below). The results show that this method achieves higher model accuracy under smaller privacy budgets than the method proposed in the first work.

(3) We apply the adaptive-clipping differential privacy method to federated learning and propose a federated learning framework with adaptive differential privacy. In this method, each client adaptively selects the gradient clipping threshold and performs gradient clipping in each local iteration; before uploading the locally updated model parameters to the server, adaptive noise is injected into the parameters to mask each client's contribution, thus protecting the privacy of the data (see the third sketch below). The results show that the method maintains good model utility under strong privacy constraints.
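The following is a minimal sketch of the region-wise noise injection in the first method, assuming per-feature relevance scores are already available. The quantile-based segmentation, the Laplace mechanism, and the budget split proportional to regional relevance are illustrative assumptions, not the thesis's exact mechanism:

```python
import numpy as np

def region_adaptive_noise(x, relevance, eps_total=1.0, n_levels=3,
                          sensitivity=1.0):
    """Inject Laplace noise into input features, region by region.

    x:          input feature vector
    relevance:  nonnegative per-feature relevance scores obtained
                from relevance analysis (assumed given)
    """
    # Segment features into n_levels regions by relevance quantiles.
    edges = np.quantile(relevance, np.linspace(0.0, 1.0, n_levels + 1))
    region = np.clip(np.searchsorted(edges[1:-1], relevance),
                     0, n_levels - 1)
    noisy = x.astype(float).copy()
    for r in range(n_levels):
        mask = region == r
        if not mask.any():
            continue
        # Budget share proportional to the region's total relevance:
        # high-relevance regions get more budget, hence less noise.
        share = relevance[mask].sum() / relevance.sum()
        eps_r = eps_total * share
        noisy[mask] += np.random.laplace(0.0, sensitivity / eps_r,
                                         mask.sum())
    return noisy
```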
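A minimal sketch of the adaptive-clipping step in the second method, assuming per-example gradients are available; the history window of 20 iterations is an assumption:

```python
import numpy as np

def adaptive_clip_threshold(norm_history, p=50.0, window=20):
    """Threshold = p-th percentile of recent gradient l2-norms."""
    return float(np.percentile(norm_history[-window:], p))

def dp_sgd_step(per_sample_grads, norm_history, sigma=1.0, p=50.0):
    """One DP-SGD step with adaptive clipping (illustrative sketch).

    per_sample_grads: shape (batch, dim), one gradient per example.
    sigma: noise multiplier; the Gaussian std is sigma * C, so the
           clipping threshold C also scales the injected noise.
    """
    C = adaptive_clip_threshold(norm_history, p)
    norms = np.linalg.norm(per_sample_grads, axis=1)
    # Clip every per-example gradient to l2-norm at most C.
    factors = np.minimum(1.0, C / (norms + 1e-12))
    clipped = per_sample_grads * factors[:, None]
    # Gaussian noise calibrated to the clipping threshold C.
    noise = np.random.normal(0.0, sigma * C, size=clipped.shape[1])
    noisy_mean = (clipped.sum(axis=0) + noise) / len(per_sample_grads)
    # Record this batch's norms for future threshold selection.
    norm_history.extend(norms.tolist())
    return noisy_mean
```

Note that this sketch ignores the privacy cost of selecting the threshold from raw gradient norms; a full treatment would account for that selection in the privacy budget.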
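A minimal sketch of one client's round in the federated scheme. Here grad_fn(theta, batch) is a hypothetical stand-in for the client's local backpropagation, batches is assumed non-empty, and the Gaussian mechanism and window size are again assumptions:

```python
import numpy as np

def client_round(theta_global, batches, grad_fn, norm_history,
                 sigma=1.0, lr=0.1, p=50.0, window=20):
    """One client's local round with adaptive clipping and noisy upload."""
    theta = theta_global.copy()
    C = 1.0  # fallback threshold before any history exists
    for batch in batches:
        g = grad_fn(theta, batch)
        norm = float(np.linalg.norm(g))
        norm_history.append(norm)
        # Adaptive threshold: p-th percentile of recent gradient norms.
        C = float(np.percentile(norm_history[-window:], p))
        g = g * min(1.0, C / (norm + 1e-12))  # per-iteration clipping
        theta = theta - lr * g
    # Mask this client's contribution before uploading to the server.
    update = theta - theta_global
    return update + np.random.normal(0.0, sigma * C, size=update.shape)
```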