
Defense Against Adversarial Attacks By Reconstructing Images

Posted on: 2021-01-23 | Degree: Master | Type: Thesis
Country: China | Candidate: Q X Rao | Full Text: PDF
GTID: 2518306050972009 | Subject: Computer software and theory
Abstract/Summary:
With the rapid development of deep learning, deep neural networks have been adopted as automatic tools in more and more fields, such as image classification, speech recognition and text translation. With the support of deep learning technology, these tasks can be handled rapidly and accurately. However, deep neural networks are often deployed without sufficient security in real-world scenarios. At every stage of a deep learning pipeline, such as data preparation, network training and model deployment, the security of deep neural networks may be threatened, which can lead to system failures or data leakage and endanger lives and property. In the field of image processing, convolutional neural networks (CNNs) are vulnerable to adversarial examples: small, human-imperceptible perturbations are added to an image so that the CNN classifier makes a wrong judgment. In view of this attack principle, many scholars have proposed corresponding defense methods, mainly by retraining the classification model or modifying its structure to fit the data distribution of adversarial examples. However, such defenses are not sufficiently effective and can easily be broken by new attack algorithms. This paper instead starts from the perspective of image pre-processing and transforms the defense task into an image denoising task. By combining traditional image processing with deep learning, we resist adversarial attacks without changing the parameters or structure of the original classification model. The research content of this paper covers the following two points.

(1) A new defense method against adversarial examples is proposed. The method consists of an image reconstruction network and a randomization layer. Specifically, the image reconstruction network has a residual structure, which avoids the vanishing or exploding gradients caused by increasing the number of layers, so it performs better at restoring the fine details of images. In the reconstruction network, BN layers and PReLU activation layers are combined to make training more efficient and stable. The loss function of the network is a perceptual loss computed on the feature maps of a feature-extraction network; it largely suppresses the error-amplification effect and improves the performance of the reconstruction network. In addition, the last part of the defense pipeline is a randomization layer, which further reduces the impact of residual perturbations on the classifier by randomly resizing and padding the output of the reconstruction network. (Minimal sketches of these components are given after this abstract.)

(2) The effectiveness and generalization ability of the defense method are verified. For three common CNN classifiers, we generated adversarial examples on ImageNet with eight mainstream attack methods. We then used this data set to train and test the defense model, which achieved classification accuracy ranging from 46% to 99%. Moreover, we compared the defense model with other image pre-processing based methods and showed that it is more effective than other state-of-the-art defenses. To analyze the generalization ability of the model, we used it to defend against other attack methods, demonstrating that it can also defend effectively against unseen attacks and adaptive attacks. To further optimize the model, we also carried out ablation studies to validate the contribution of each part of the defense network.
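The abstract describes the reconstruction network as a residual architecture built from convolution, BN and PReLU layers. Below is a minimal PyTorch sketch of one such residual block; the channel count and kernel size are assumptions for illustration, since the abstract gives no exact hyperparameters.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual block: conv + BN + PReLU with a skip connection (assumed layout)."""
    def __init__(self, channels=64):  # channel count is an assumption
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # The skip connection mitigates vanishing/exploding gradients in deep stacks.
        return x + self.body(x)
```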
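The perceptual loss is described as being computed on feature maps of a feature-extraction network rather than on raw pixels. The following sketch uses a frozen VGG16 truncated after an early activation; the choice of VGG16 and the cut point are assumptions, as the abstract does not name the extractor.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """Mean-squared distance between feature maps of a frozen extractor (sketch)."""
    def __init__(self, cut=9):  # assumed cut: up to relu2_2 of VGG16
        super().__init__()
        self.features = vgg16(pretrained=True).features[:cut].eval()
        for p in self.features.parameters():
            p.requires_grad = False  # the extractor is fixed, only used for comparison
        self.criterion = nn.MSELoss()

    def forward(self, reconstructed, clean):
        # Compare the reconstruction and the clean image in feature space.
        return self.criterion(self.features(reconstructed), self.features(clean))
```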
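Finally, the randomization layer randomly resizes the reconstructed image and pads it to a fixed size before classification. This is a minimal sketch of that idea; the size range and zero-padding are assumptions for illustration.

```python
import random
import torch
import torch.nn.functional as F

def randomization_layer(x, out_size=331, min_size=299):
    """Randomly resize a batch of images, then zero-pad to out_size x out_size."""
    new_size = random.randint(min_size, out_size)        # random target size
    x = F.interpolate(x, size=(new_size, new_size), mode="nearest")
    pad_total = out_size - new_size
    pad_left = random.randint(0, pad_total)              # random left/right split
    pad_top = random.randint(0, pad_total)                # random top/bottom split
    # F.pad takes (left, right, top, bottom) for 4-D input
    return F.pad(x, (pad_left, pad_total - pad_left, pad_top, pad_total - pad_top))
```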
Keywords/Search Tags:Adversarial example, Neural network, Residual learning, Perceptual loss