
Research On Privacy Protection Technology Of Deep Learning Model Based On Differential Privacy

Posted on: 2024-07-28
Degree: Master
Type: Thesis
Country: China
Candidate: G B Lin
Full Text: PDF
GTID: 2568307067472674
Subject: Computer technology

Abstract/Summary:
The big data era has greatly promoted the flourishing development of deep learning. Massive training data drawn from daily life enables deep learning models to achieve better generalization performance, leading to greater success in practical applications. However, studies have shown that deep learning models are vulnerable to malicious attacks that result in privacy breaches. Research on privacy protection technologies for deep learning models has therefore become a research hotspot in recent years.

Differential privacy is a classical data privacy protection technique originally developed to address privacy leakage in statistical databases. It is now widely used by scholars worldwide to protect the private data underlying deep learning models. The most influential work in this direction is the differentially private stochastic gradient descent (DPSGD) algorithm. DPSGD extends the stochastic gradient descent (SGD) algorithm by clipping the gradient information and adding noise to it, thereby protecting the original training data. A large body of work has shown that DPSGD is broadly applicable and effective for deep learning models. However, other work has pointed out that DPSGD typically faces a trade-off between utility and privacy: it may harm the utility of a deep learning model, and the degree of harm is positively correlated with the level of privacy protection it provides. Reducing the damage that DPSGD does to model utility while still providing meaningful privacy protection has therefore become an urgent problem.

To address these problems with DPSGD, this thesis designs two methods that improve the accuracy of differentially private deep learning models under the same privacy budget. The main contributions are summarized as follows:

(1) This thesis proposes a differentially private training scheme with an adaptive gradient clipping mechanism. The scheme includes a search algorithm for the optimal gradient clipping threshold (the Greedy-DP algorithm) and two adaptive gradient clipping strategies for DPSGD (Transfer and Decay). The Greedy-DP algorithm is designed around a greedy strategy and searches for the optimal gradient clipping threshold in each round of deep learning model training. The Transfer and Decay strategies allow the model to obtain the gradient clipping threshold for each round of training adaptively when using DPSGD. Experimental results show that the proposed training scheme significantly improves model utility under the same privacy budget.

(2) This thesis proposes a differentially private training scheme that dynamically adjusts the momentum hyperparameter. The scheme first investigates how different momentum parameter values in DPSGD affect deep learning model training. Based on extensive experimental results, it then proposes a method for dynamically setting the DPSGD momentum hyperparameter during training. Experimental results show that the proposed method improves model utility under the same privacy budget.
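The DPSGD mechanism described above, clipping each per-example gradient to a fixed L2 norm and adding calibrated Gaussian noise before averaging, can be illustrated with a minimal NumPy sketch. This is not the thesis's implementation; the function name `dpsgd_step` and the default values for `clip_norm` and `noise_multiplier` are illustrative assumptions.

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.1):
    """One illustrative DPSGD update.

    per_example_grads: array of shape (batch_size, num_params), one
    gradient per training example. Each gradient is clipped to an L2
    norm of at most clip_norm, the clipped gradients are summed, and
    Gaussian noise with standard deviation noise_multiplier * clip_norm
    is added before averaging over the batch.
    """
    batch_size = per_example_grads.shape[0]
    # Clip each example's gradient: scale down any gradient whose
    # L2 norm exceeds clip_norm; leave smaller gradients unchanged.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # Add noise calibrated to the clipping threshold, then average.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=clipped.shape[1])
    noisy_mean = (clipped.sum(axis=0) + noise) / batch_size
    return params - lr * noisy_mean
```

Because the noise scale is tied to `clip_norm`, the choice of clipping threshold directly controls both the privacy guarantee and the distortion of the update, which is the tension the thesis's adaptive clipping strategies target.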
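The abstract does not give formulas for the Decay clipping strategy or the dynamic momentum schedule, so the following is only a plausible sketch of what per-round schedules of this kind can look like: a geometric decay of the clipping threshold and a linear ramp of the momentum hyperparameter. Both function names and all parameter values are hypothetical, not taken from the thesis.

```python
def decayed_clip_threshold(c0, decay_rate, t):
    """One plausible Decay-style schedule: shrink the clipping
    threshold geometrically as training progresses,
    C_t = C_0 * decay_rate ** t."""
    return c0 * decay_rate ** t

def dynamic_momentum(beta_start, beta_end, total_steps, t):
    """A hypothetical schedule for the DPSGD momentum hyperparameter:
    ramp linearly from beta_start to beta_end over total_steps."""
    frac = min(max(t / total_steps, 0.0), 1.0)
    return beta_start + (beta_end - beta_start) * frac
```

Schedules like these replace a single hand-tuned constant with a per-round value, which is the general idea behind both of the thesis's contributions.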
Keywords/Search Tags: Differential Privacy, Differentially Private Stochastic Gradient Descent, Deep Learning, Privacy Protection, Performance Optimization