
The Research Of Security And Privacy In Deep Learning

Posted on: 2022-03-02    Degree: Master    Type: Thesis
Country: China    Candidate: F C Yu    Full Text: PDF
GTID: 2518306338973249    Subject: Computer Science and Technology
Abstract/Summary:
In recent years, with the accumulation of massive data and the growth of computing power, artificial intelligence technologies have been widely applied in many fields, such as computer vision, speech recognition, and natural language processing. However, because artificial intelligence systems depend heavily on data and deep learning algorithms lack interpretability, an adversary with background knowledge can launch various types of attacks, exposing current artificial intelligence systems to serious security and privacy risks. According to the adversary's goal, these attacks can be divided into two types. The first type concerns the privacy of deep learning, such as the attribute inference attack and the model inference attack, which target the model's training data and parameters. The second type concerns the security of deep learning, such as the adversarial example attack, whose purpose is to mislead artificial intelligence systems.

To address the privacy problem in deep learning systems, an effective approach is to incorporate the differential privacy model. At present, the most widely used differentially private algorithm in deep learning is DPSGD (Differentially Private Stochastic Gradient Descent). However, DPSGD's parameters are difficult to set, and its measurement of privacy loss is complex. We propose DPADAM (Adaptive Moment Estimation with Differential Privacy), a new privacy-preserving optimization algorithm for deep learning that combines the Adam gradient optimizer with differential privacy, and we adopt zCDP (zero-concentrated differential privacy) as a more flexible and accurate measure of privacy loss. Extensive evaluation results show that DPADAM reduces the dependence on parameter settings and improves the model's fitting performance.

To address the adversarial example problem in deep learning systems, this thesis proposes a general defense model based on conditional generative adversarial networks. In this model, defending against adversarial examples is treated as an image-to-image translation task: a generator eliminates the adversarial perturbation, mapping an adversarial example back to a clean example. Experiments show that the defense framework effectively resists various types of attacks, with defense performance not inferior to current state-of-the-art defense mechanisms.
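The DPADAM idea described above can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the function names, default hyperparameters, and the simplification of ignoring subsampling amplification are all assumptions. It shows the two standard ingredients of a differentially private Adam step — per-example gradient clipping plus calibrated Gaussian noise, followed by the usual Adam moment updates — and the Bun–Steinke zCDP-to-(ε, δ) conversion used for privacy accounting.

```python
import numpy as np

def dp_adam_step(per_example_grads, m, v, t, theta,
                 clip_norm=1.0, noise_multiplier=1.1,
                 lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8, rng=None):
    """One DPADAM-style step (illustrative sketch; names/defaults assumed).
    Clips each per-example gradient, adds Gaussian noise scaled to the
    clipping bound, then applies a standard Adam update."""
    rng = np.random.default_rng(0) if rng is None else rng
    # 1. Clip each example's gradient to L2 norm <= clip_norm.
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    # 2. Sum, add Gaussian noise calibrated to the clipping bound, average.
    n = len(per_example_grads)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=theta.shape)
    g_tilde = (np.sum(clipped, axis=0) + noise) / n
    # 3. Standard Adam moment updates with bias correction.
    m = beta1 * m + (1 - beta1) * g_tilde
    v = beta2 * v + (1 - beta2) * g_tilde ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

def zcdp_to_eps(sigma, steps, delta):
    """zCDP accounting sketch: T Gaussian-mechanism steps with noise
    multiplier sigma satisfy rho = T / (2 * sigma**2)-zCDP, which
    converts to (eps, delta)-DP via eps = rho + 2*sqrt(rho*ln(1/delta))."""
    rho = steps / (2 * sigma ** 2)
    return rho + 2 * np.sqrt(rho * np.log(1 / delta))
```

One appeal of zCDP accounting, as the abstract notes, is that the privacy cost of repeated Gaussian-mechanism steps composes by simple addition of ρ values before a single final conversion to (ε, δ).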
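The purification contract behind the cGAN defense can be illustrated with a toy example. This is only a conceptual sketch: the thesis trains a conditional generative adversarial network, while here a least-squares linear map stands in for the generator (an assumption made purely so the example stays self-contained). The point illustrated is the input-output contract — learn a map g with g(x_adv) ≈ x_clean, then feed g(x) to the classifier instead of x.

```python
import numpy as np

# Toy stand-in for the cGAN generator: a linear map fit by least
# squares so that purified adversarial inputs land near their clean
# counterparts (dimensions and noise scale are illustrative).
rng = np.random.default_rng(0)

d = 8                                     # flattened "image" dimension
x_clean = rng.normal(size=(200, d))       # clean examples
delta = 0.1 * rng.normal(size=(200, d))   # stand-in adversarial perturbation
x_adv = x_clean + delta

# Fit generator G so that x_adv @ G approximates x_clean.
G, *_ = np.linalg.lstsq(x_adv, x_clean, rcond=None)

purified = x_adv @ G
err_before = np.mean((x_adv - x_clean) ** 2)   # distortion of raw adversarial input
err_after = np.mean((purified - x_clean) ** 2) # distortion after purification
```

In the thesis's model, the generator is additionally trained against a discriminator so that purified images are indistinguishable from clean ones, which is what lets the defense generalize across attack types rather than overfit one perturbation pattern.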
Keywords/Search Tags:deep learning, differential privacy, adversarial example, generative adversarial network