With the rapid development and widespread success of deep learning, it is being applied in many safety-critical environments and has become a major force in applications ranging from self-driving cars to surveillance and security. Recently, however, deep neural networks have been found to be vulnerable to adversarial examples: inputs carefully crafted by adding tiny perturbations. These perturbations are usually too small for a human to notice, yet they can completely fool a deep learning model into assigning the wrong class. Many kinds of attack methods now exist. To secure deep learning applications, the main defense methods can be classified along two axes: model and data. At the model level, defenses divide into modifying the network itself and using an additional network. At the data level, they mainly involve modifying the training process or modifying the input samples. Research has found that certain image transformation operations can destroy the effect of adversarial examples and change the behavior of the deep network. Based on this property, we propose an adversarial example detection method built on image transformation.

We take the dataset images as the original images and first generate adversarial examples against a neural network recognizer, so that the recognizer is successfully fooled. We then apply the same image transformation operations to both the original images and the adversarial examples. In this paper we mainly use three kinds of image transformation: the samples before and after transformation are fed into the model, and each pair of prediction values before and after transformation serves as the input feature vector for training an SVM classifier. Given the prediction difference produced by transforming a sample, the trained SVM classifier can then correctly decide whether the sample is adversarial.

The main contributions of this paper are:
1. We propose a method for detecting adversarial examples based on a variety of image transformation techniques. Because an adversarial example may be highly sensitive to only a single transformation, we combine several transformation techniques and train an SVM classifier that successfully detects adversarial examples from the extracted feature vectors.
2. Three typical attack algorithms, FGSM, DeepFool, and CW, are used to construct the adversarial example datasets. For the image classification task, three common datasets, MNIST, CIFAR-10, and ImageNet, are used to build adversarial example datasets under the different attack algorithms.