
Adversarial Examples Defense Method Based On Parallel Attention Mechanism

Posted on: 2022-11-10    Degree: Master    Type: Thesis
Country: China    Candidate: J Zhao    Full Text: PDF
GTID: 2518306758491574    Subject: Automation Technology
Abstract/Summary:
The development of deep learning has greatly promoted social progress, playing an important role in speech recognition, image processing, language translation, malware detection, and other fields. An adversarial example attack is a malicious attack against image recognition tasks in deep learning, in which an attacker causes an image classification model to misclassify an image by adding perturbations that are imperceptible to the human eye. Images carrying such malicious perturbations are called adversarial examples; they compromise the functionality of classification models and pose a serious threat to the development of deep learning.

A survey of current mainstream adversarial example defense methods shows that many of them focus only on the pixels of the adversarial example, attempting to defend against it by removing the malicious perturbations hidden in those pixels and regenerating a purified image that eliminates the negative effect of the adversarial example on the accuracy of the target model. However, these methods ignore the features of the image as a whole and cannot completely remove the hidden perturbations. In addition, most existing defenses ignore the impact of multi-scale features: different features in an image generally occur at different scales, so the defense model misses feature information during extraction, which lowers the classification accuracy of the images purified by these defenses.

To address the difficulty of removing the malicious perturbations of adversarial examples, we take inspiration from the mammalian visual system and propose a generative adversarial network that incorporates a parallel attention mechanism. The generative adversarial network learns to produce purified images, while the parallel attention mechanism guides it by learning both the individual characteristics and the spatial information of the images, so that during purification the defense model attends not only to pixel-level features but also to the overall features and spatial structure of objects, producing images with a higher degree of purification. We evaluated the method in both white-box and black-box settings, using multiple attack methods on the MNIST and CIFAR-10 datasets, and compared the results with other defense models; the proposed defense achieves the best defense performance under attack in both scenarios.

To address the multi-scale nature of the features in adversarial example images, this thesis further proposes a multi-scale-feature dual-parallel-attention defense method. The method uses a multi-scale feature module to extract features at different scales from the input image, a parallel attention feature fusion module to fuse the multi-scale feature maps, and an image reconstruction module to generate the purified image. It can attend to both the individual features and the spatial features of the image while exploiting multi-scale features to improve the accuracy of the purified images. Experiments show that the method effectively improves the classification accuracy of the model on purified clean images and, at the same time, improves its accuracy in defending against adversarial examples.
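The thesis does not publish code, so the following PyTorch sketch is only an illustration of the kind of purifier the abstract describes: a channel-attention branch and a spatial-attention branch run in parallel on multi-scale feature maps, their outputs are fused, and a reconstruction head emits the purified image. Every module name, channel width, and kernel size here is an assumption, not the authors' actual architecture.

```python
# Illustrative sketch only: module names, channel widths, and kernel sizes
# are assumptions, not the thesis's published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Per-channel ('individual feature') attention via global pooling + MLP."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        w = F.adaptive_avg_pool2d(x, 1).flatten(1)            # B x C
        w = torch.sigmoid(self.mlp(w)).unsqueeze(-1).unsqueeze(-1)
        return x * w                                          # reweight channels


class SpatialAttention(nn.Module):
    """Spatial attention map built from channel-wise avg/max statistics."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                                          # reweight locations


class ParallelAttention(nn.Module):
    """Run channel and spatial attention in parallel and fuse their outputs."""
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.channel(x), self.spatial(x)], dim=1))


class MultiScalePurifier(nn.Module):
    """Generator: multi-scale feature extraction -> parallel-attention fusion
    -> image reconstruction. Trained adversarially against a discriminator
    (not shown) to map adversarial images back to clean-looking ones."""
    def __init__(self, in_ch=3, width=32):
        super().__init__()
        # Multi-scale feature module: parallel convs with different kernel sizes.
        self.scales = nn.ModuleList([
            nn.Conv2d(in_ch, width, k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.attention = ParallelAttention(3 * width)
        # Image reconstruction module.
        self.reconstruct = nn.Sequential(
            nn.Conv2d(3 * width, width, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, in_ch, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x_adv):
        feats = torch.cat([F.relu(s(x_adv)) for s in self.scales], dim=1)
        return self.reconstruct(self.attention(feats))        # purified image
```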
Finally, to give deep learning models that currently lack any defense against adversarial examples such a capability, we design an adversarial example purification system based on the parallel attention mechanism. The system automatically adds adversarial example defense capability to a defenseless network model and improves the model's robustness at the cost of only a small loss in classification accuracy.
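As a hedged sketch of how such a purification system might be attached to an existing model (the interface below is an assumption built on the hypothetical MultiScalePurifier above, not the thesis's actual system): the trained purifier is simply inserted in front of the frozen, defenseless classifier, so inputs are cleaned before classification while the target model itself is left unchanged.

```python
# Hypothetical wrapper; assumes the MultiScalePurifier sketch above.
import torch
import torch.nn as nn


class PurifiedClassifier(nn.Module):
    """Prepend a trained purifier to a frozen, defenseless classifier."""
    def __init__(self, purifier: nn.Module, classifier: nn.Module):
        super().__init__()
        self.purifier = purifier.eval()
        self.classifier = classifier.eval()
        for p in self.classifier.parameters():
            p.requires_grad_(False)      # the target model stays unchanged

    @torch.no_grad()
    def forward(self, x):
        return self.classifier(self.purifier(x))
```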
Keywords/Search Tags: Adversarial Example, Generative Adversarial Network, Image Classification, Deep Learning