In recent years, deep learning has developed rapidly and now plays an important role in image classification, object detection, pedestrian recognition, and other fields. Meanwhile, research on its robustness (adversarial examples) has become a hot topic in deep learning. The inherent vulnerability of deep neural network models poses a serious threat to security-critical applications such as driverless vehicles, face recognition, and intelligent medical care. Because adversarial perturbations are diverse and irregular, denoising adversarial examples has become a research hotspot in computer vision: it advances the study of neural network robustness and offers substantial research value.

To enhance the security and robustness of deep neural networks, this paper studies defense methods based on Generative Adversarial Networks (GANs). The main work covers the following two aspects.

First, targeting the misclassification of adversarial examples, a GAN-based algorithm for removing noise from adversarial examples is proposed. Clean examples are fed into the generator as auxiliary corrective input, and the generator is trained using the distribution of clean images together with the discriminator's judgments, producing a defense with a stronger denoising effect. The method improves the generator's denoising by combining a reconstruction loss, an adversarial loss, and an isomorphic loss, and it does not reduce classification accuracy on clean examples; that is, it mitigates the catastrophic forgetting problem.

Second, motivated by the different information carried by images at different scales and by the salient information within individual images, a defense method based on multi-scale information and salient-feature detection is proposed. It consists of two modules: a multi-scale feature-map matching model and a salient-feature information matching model. Both are added to the adversarial network: the multi-scale model focuses on clean and adversarial examples at different scales, while the saliency model studies the difference in saliency information between clean and adversarial examples. The two complement each other, jointly removing the noise from adversarial examples from the perspectives of saliency detection and multi-scale information, and thereby improving the robustness of the model.

We conduct comparative experiments on the two proposed defense methods on several classic natural-image datasets. Experimental results show that the GAN-based defense method maintains accuracy on clean examples, and that both methods improve the denoising of adversarial examples. Moreover, the defense method based on multi-scale and salient-feature detection handles image details better and shows a stronger defense effect in the experiments.
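The first method's generator objective combines three terms. The paper does not give the exact formulas here, so the sketch below is a minimal, hypothetical illustration of how such a combined loss might be assembled: the function name, the weights, and the choice of an image-gradient term as the "isomorphic" (structural-consistency) loss are all assumptions, not the thesis's actual definitions.

```python
import numpy as np

def combined_denoising_loss(x_clean, x_denoised, d_score,
                            w_rec=1.0, w_adv=0.1, w_iso=0.1):
    """Hypothetical combined generator loss:
    reconstruction + adversarial + isomorphic (structure) terms."""
    # Reconstruction loss: pixel-wise L2 between the denoised output
    # and the corresponding clean example.
    rec = np.mean((x_denoised - x_clean) ** 2)
    # Adversarial loss (non-saturating form): the generator is rewarded
    # when the discriminator score d_score for its output approaches 1.
    adv = -np.log(d_score + 1e-12)
    # Assumed isomorphic term: match image gradients of the denoised
    # and clean images, penalising structural discrepancies.
    gx = np.diff(x_denoised, axis=0) - np.diff(x_clean, axis=0)
    gy = np.diff(x_denoised, axis=1) - np.diff(x_clean, axis=1)
    iso = np.mean(gx ** 2) + np.mean(gy ** 2)
    return w_rec * rec + w_adv * adv + w_iso * iso
```

In this sketch the weights trade off pixel fidelity against realism and structure; keeping the reconstruction term dominant is one plausible way to preserve accuracy on clean inputs.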
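The second method compares clean and adversarial examples both across scales and at salient locations. As a rough, hypothetical sketch (the pooling scheme, the gradient-magnitude saliency proxy, and all function names are assumptions, not the thesis's actual models):

```python
import numpy as np

def downsample(img):
    # 2x average pooling (assumes even height and width).
    return (img[::2, ::2] + img[1::2, ::2]
            + img[::2, 1::2] + img[1::2, 1::2]) / 4.0

def multiscale_match_loss(x_clean, x_denoised, n_scales=3):
    # Compare the two images at several resolutions: coarse scales
    # capture global structure, fine scales capture local detail.
    loss, a, b = 0.0, x_clean, x_denoised
    for _ in range(n_scales):
        loss += np.mean((a - b) ** 2)
        a, b = downsample(a), downsample(b)
    return loss / n_scales

def saliency_map(img):
    # Crude saliency proxy: normalised gradient magnitude.
    gx = np.abs(np.diff(img, axis=0, append=img[-1:]))
    gy = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    s = gx + gy
    return s / (s.max() + 1e-12)

def saliency_match_loss(x_clean, x_denoised):
    # Weight the mismatch by where the clean image is salient,
    # so visually important regions are denoised more carefully.
    w = saliency_map(x_clean)
    return np.mean(w * (x_clean - x_denoised) ** 2)
```

The two losses are complementary in the sense described above: the multi-scale term constrains the whole image at every resolution, while the saliency term concentrates on the regions a viewer would notice first.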