
Research On Adversarial Sample Generation Method Based On Gradient Masking

Posted on: 2022-02-05
Degree: Master
Type: Thesis
Country: China
Candidate: W X Hu
Full Text: PDF
GTID: 2518306491466264
Subject: Computer technology
Abstract/Summary:
Deep neural networks have been widely used in many fields, but the existence of adversarial examples poses a serious security risk to their application. Although many attack methods have been proposed, they still have shortcomings. Gradient-based attack methods are efficient and achieve high success rates, but the adversarial examples they generate perturb the entire image, so the change relative to the original image is large. Optimization-based methods, and methods that modify only a few pixels, can reduce the altered area of the image, but they require a large amount of computation, so their efficiency is low and their success rates are not high. Finally, many existing attack methods have low success rates against black-box models. This paper addresses these shortcomings in the generation of image adversarial examples. The main contributions are as follows:

Gradient Shielding-Based Adversarial Example Generation Algorithm: To address the shortcomings of existing gradient-based attack methods and to reduce the perturbation of the adversarial example relative to the original example, this paper proposes two gradient shielding algorithms. The first is a region-based gradient shielding algorithm, which adds perturbation only to key regions of the image by combining a gradient shielding matrix with a gradient attack method (see the first sketch below). The second is a threshold-based gradient shielding algorithm, which sets a gradient threshold to automatically ignore the gradients of insensitive regions, removing the need to select regions manually (see the second sketch below). Experiments show that both gradient shielding attacks maintain a high success rate while greatly reducing the perturbation of the adversarial example relative to the original example.

Gradient Shielding Adversarial Example Generation Algorithm Combined with Multi-scale Transformation and Gradient Acceleration: Inspired by input diversity and momentum-based gradient optimization, and in order to improve both the transferability of the generated adversarial examples and the efficiency of generating them, this paper also improves the gradient shielding attack by combining scale transformation with gradient acceleration, yielding a multi-scale transformation and gradient acceleration shielding algorithm (see the third sketch below). Experimental results show that the algorithm improves the black-box attack ability of the adversarial examples and also improves generation efficiency.

Visualization System for the Region-Based Gradient Shielding Algorithm: To visualize the effect of the region-based gradient shielding attack, this paper implements a visual attack system that makes it easy to compare attack results across different regions of a selected image, to display the neural network's classification before and after attacking different regions, and to show the perturbation of the adversarial example relative to the original example, giving a clearer picture of how adversarial examples affect the neural network.
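The following is a minimal sketch of the region-based idea, written in PyTorch: an FGSM-style step whose perturbation is confined to a chosen rectangle by a 0/1 shielding matrix. The function name, the rectangle parameterization, and the epsilon value are illustrative assumptions, not the thesis's exact implementation.

```python
# Minimal sketch: region-based gradient shielding on top of FGSM.
# Assumed: model returns logits; images are NCHW tensors in [0, 1].
import torch
import torch.nn.functional as F

def region_masked_fgsm(model, x, y, region, epsilon=8 / 255):
    """Apply an FGSM step only inside a rectangular key region.

    region: (top, left, height, width) of the area allowed to change
            (an assumed parameterization, for illustration only).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # Gradient shielding matrix: 1 inside the chosen region, 0 elsewhere,
    # so the perturbation stays local instead of global.
    mask = torch.zeros_like(x)
    t, l, h, w = region
    mask[..., t:t + h, l:l + w] = 1.0

    x_adv = x + epsilon * mask * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

The single FGSM step is only a stand-in; any gradient attack (iterative FGSM, PGD) could be masked the same way, since the shielding matrix simply multiplies the update.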
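A comparable sketch of the threshold-based variant: gradient entries whose magnitude falls below a threshold are zeroed out, so the mask is derived automatically rather than from a hand-picked region. The quantile-based threshold rule below is an assumption; the thesis states only that a gradient threshold is set.

```python
# Minimal sketch: threshold-based gradient shielding (assumed quantile rule).
import torch
import torch.nn.functional as F

def threshold_masked_fgsm(model, x, y, epsilon=8 / 255, quantile=0.9):
    """Keep only gradient entries whose magnitude is above a per-image
    threshold, so insensitive regions are ignored automatically."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    g = x.grad
    # Per-image threshold: here the top 10% of |gradient| values.
    thresh = torch.quantile(g.abs().flatten(1), quantile, dim=1)
    mask = (g.abs() >= thresh.view(-1, 1, 1, 1)).float()

    x_adv = x + epsilon * mask * g.sign()
    return x_adv.clamp(0, 1).detach()
```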
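Finally, a sketch of how the shielding mask might be combined with multi-scale inputs and momentum ("gradient acceleration"). Pixel-value scaling in the style of SI-FGSM stands in for the thesis's scale transformation, and the momentum update follows MI-FGSM; both are assumptions about the exact design, chosen because the abstract cites input diversity and gradient acceleration as the inspirations.

```python
# Minimal sketch: masked iterative attack with multi-scale gradients
# (SI-FGSM-style pixel scaling, assumed) and momentum (MI-FGSM-style).
import torch
import torch.nn.functional as F

def multiscale_momentum_masked_attack(model, x, y, mask, epsilon=8 / 255,
                                      steps=10, mu=1.0,
                                      scales=(1.0, 0.5, 0.25)):
    """Average gradients over several scaled copies of the input and
    accumulate them with momentum to improve black-box transferability;
    the shielding mask keeps the update inside sensitive regions only."""
    alpha = epsilon / steps          # per-step size
    x_adv = x.clone().detach()
    g_accum = torch.zeros_like(x)    # momentum buffer

    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x)
        for s in scales:
            # Scale-transformed copy of the current adversarial example.
            loss = F.cross_entropy(model(x_adv * s), y)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        grad = grad / len(scales)

        # Gradient acceleration: momentum over L1-normalized gradients.
        g_accum = mu * g_accum + grad / grad.abs().mean(
            dim=(1, 2, 3), keepdim=True)

        x_adv = (x_adv + alpha * mask * g_accum.sign()).clamp(0, 1).detach()
    return x_adv
```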
In summary, this paper proposes three adversarial example generation methods for image recognition neural networks that address the shortcomings of existing adversarial attacks. The gradient shielding attack methods (region-based and threshold-based) ensure a high attack success rate while substantially reducing the perturbation of the adversarial example relative to the original example; the gradient shielding algorithm combined with multi-scale transformation and gradient acceleration improves the transferability and generation efficiency of the adversarial examples. In addition, this paper implements a visualization attack system based on region shielding, which intuitively shows the adversarial perturbation produced by attacking different regions.
Keywords/Search Tags: Neural Network, Adversarial Examples, Gradient Shielding, Multi-scale Integration, Gradient Acceleration