Research On Defense Adversarial Examples In Deep Learning

Posted on: 2023-01-02
Degree: Master
Type: Thesis
Country: China
Candidate: C L Ye
Full Text: PDF
GTID: 2568306815468584
Subject: Software engineering
Abstract/Summary:
Deep neural networks have been successfully employed in a variety of domains in recent years, including autonomous driving, face recognition, and medical systems. However, deep learning models have been found to be vulnerable to adversarial examples at prediction time, which cause the target model to produce incorrect predictions with high confidence. This attack poses a significant threat to deep neural network applications. Researchers have developed defense approaches such as adversarial training, input preprocessing, and attack-specific defenses to address this problem. Attack-specific defenses generalize poorly, since each typically defends against only certain adversarial attacks. This dissertation therefore proposes two improved defense methods, one based on adversarial training and one based on input preprocessing. The main research work and contributions are as follows:

(1) Adversarial training is the most intuitive defense against adversarial attacks; it performs well and is widely used, but its training cost is relatively high. To address this, this dissertation presents a random-perturbation method based on FGSM (Fast Gradient Sign Method), together with a cyclic learning-rate schedule that makes training more stable. Experimental results show that the method preserves the model's robustness while reducing training cost (a minimal sketch of this training scheme is given after the abstract). In addition, to gain a better grasp of the nature of adversarial training, this dissertation uses saliency maps to explain why the robust model generalizes well.

(2) Adversarial examples are formed by adding perturbations to the original examples and are therefore inherently unstable. This dissertation proposes an adversarial-perturbation elimination method based on generative adversarial networks: the generator is trained to convert adversarial examples into clean examples, thereby cleansing the adversarial perturbations, and the recovered examples are then fed back to the discriminator. A U-Net structure is introduced into the discriminator, which improves the discriminator's training efficiency as well as the quality of the examples recovered by the generator (a toy purification sketch is given after the abstract). The defense method has strong generalization performance and can effectively eliminate perturbations generated by both single-step and iterative attacks.
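As a rough illustration of contribution (1), the sketch below pairs a random-start FGSM step with a cyclic learning-rate schedule in PyTorch. This is a minimal sketch under assumed settings: the toy linear model, the epsilon/alpha budgets, the batch shapes, and the CyclicLR parameters are all illustrative assumptions, not values taken from the dissertation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_rs_step(model, x, y, epsilon=8/255, alpha=10/255):
    """Craft adversarial examples with a random start + one FGSM step.

    Minimal sketch of random-perturbation FGSM; epsilon and alpha
    are illustrative assumptions, not the dissertation's settings.
    """
    # Start from a random point inside the epsilon-ball instead of x itself.
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad, = torch.autograd.grad(loss, delta)
    # One signed-gradient step, then project back into the epsilon-ball.
    delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon).detach()
    return (x + delta).clamp(0.0, 1.0)  # keep pixels in the valid [0, 1] range

# Toy training loop with a cyclic learning rate (hypothetical model and data).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sched = torch.optim.lr_scheduler.CyclicLR(opt, base_lr=1e-4, max_lr=0.2,
                                          step_size_up=100)
x = torch.rand(8, 3, 32, 32)       # stand-in batch of images
y = torch.randint(0, 10, (8,))     # stand-in labels
for _ in range(3):
    x_adv = fgsm_rs_step(model, x, y)
    loss = F.cross_entropy(model(x_adv), y)
    opt.zero_grad(); loss.backward(); opt.step(); sched.step()
```

The random start inside the epsilon-ball is what distinguishes this from plain single-step FGSM training; it is commonly credited with making cheap single-step adversarial training less prone to degenerate solutions.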
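For contribution (2), the sketch below shows the purification idea: a small U-Net-style encoder-decoder maps a possibly adversarial image back toward clean data before classification. The architecture, layer widths, and `purify_then_classify` helper are toy assumptions for illustration; the dissertation's actual generator, GAN training loop, and U-Net discriminator are not reproduced here.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style encoder-decoder with one skip connection.

    A toy stand-in for the purification generator; depth and widths
    are illustrative assumptions, not the dissertation's design.
    """
    def __init__(self, ch=3, width=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(ch, width, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1),
            nn.ReLU())
        # The skip connection concatenates encoder and decoder features,
        # so the output head sees 2 * width channels.
        self.out = nn.Conv2d(width * 2, ch, 3, padding=1)

    def forward(self, x):
        e = self.enc(x)              # full-resolution features
        d = self.up(self.down(e))    # downsample, then upsample back
        return torch.sigmoid(self.out(torch.cat([e, d], dim=1)))

def purify_then_classify(generator, classifier, x):
    """Cleanse a (possibly adversarial) input, then classify it."""
    with torch.no_grad():
        return classifier(generator(x))

# Shape check on a stand-in batch: output matches the input resolution.
g = TinyUNet()
x = torch.rand(4, 3, 32, 32)
print(g(x).shape)  # torch.Size([4, 3, 32, 32])
```

One appeal of such preprocessing defenses is that the downstream classifier needs no retraining, which is consistent with the abstract's claim that the method generalizes across single-step and iterative attacks.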
Keywords/Search Tags: deep learning, adversarial examples, adversarial training, generative adversarial networks