Deep neural networks are leading a new wave of artificial intelligence, with remarkable achievements in autonomous driving, medical image analysis, image recognition, and smart manufacturing. Deep neural networks for image processing are widely used in smart manufacturing. However, these networks are vulnerable to adversarial attacks: attackers can modify target images by adding imperceptible perturbations that mislead the classification network, causing damage to equipment as well as waste of resources. The existence of adversarial attacks greatly affects the security of deep neural network applications and poses a significant risk to social production and people's daily lives. It is therefore important to study defenses against adversarial attacks.

In response to the impact of adversarial attacks, researchers have proposed various defense methods, which at this stage fall into three main categories: (1) pre-processing the adversarial example; (2) enhancing the robustness of the deep neural network; (3) detecting the adversarial example. First, data pre-processing is fast and does not require retraining the neural network; however, it loses key information when processing adversarial examples, which leads the classification network to extract the wrong feature regions and make incorrect judgments. Second, defenses that enhance the robustness of deep neural networks do so by increasing the randomness and cognitive properties of the network model, but this increases the complexity of the network and requires retraining the model, which is more expensive and still ineffective against well-designed, never-before-seen attack methods. Finally, adversarial-example detection distinguishes clean examples from adversarial examples through a threshold strategy; it is computationally inexpensive and does
not require changing or retraining the neural network. To address the shortcomings of the first and second categories, we propose several defense methods that enhance the robustness of deep neural networks. The main research work of this paper is as follows:

(1) Data pre-processing can effectively defend against adversarial attacks and has achieved good defensive results. However, while eliminating the adversarial perturbation, it loses information in key regions, so the deep neural network cannot extract features correctly and outputs wrong classification results. To address this problem, we propose a defense method based on data pre-processing. First, the perturbations added to the adversarial examples are removed by image denoising to reduce their influence; second, super-resolution reconstruction is used to recover the key information of the image and make up for the information lost during restoration. The features of the adversarial examples are thereby remapped back into the space of clean images, so that the classification network finally outputs the correct results. Compared with other denoising methods, this approach requires no extensive network training and can effectively recover key image information and improve the robustness of the model without degrading performance on clean images.

(2) Although data pre-processing has made some progress in defending against adversarial-example attacks, it operates directly on the input image; if handled improperly, the image becomes distorted and aberrated, which affects the features the deep neural network extracts and leaves it poorly robust. Therefore, in order to preserve the naturalness of the input image and its texture details, we propose defending against adversarial-example attacks by
enhancing the robustness of the neural network. The Vision Transformer (ViT) model currently has a wide range of applications in several fields, so it is taken as the research object. We propose the ResNet-Squeeze Excitation-Vision Transformer (ResNet-SE-ViT) defense method, which enhances the robustness of the ViT model by introducing a ResNet-SE module that acts on the Attention module of the ViT model.

(3) Building on the above study of ViT robustness, the Adversarial training Selective Kernel-Vision Transformer (ASK-ViT) model is proposed to further improve the robustness of the ViT model. On the one hand, the SK module lets neurons adaptively select the size of their receptive field according to the multi-scale information of the input when extracting features; on the other hand, the ViT model is adversarially trained on a dataset consisting of clean and adversarial examples.

(4) Adversarial training, one of the most promising defenses for improving the robustness of deep neural networks, requires adding newly emerged adversarial examples to the training set and retraining. Improving robustness by retraining in this way suffers from long training time and poor generalization ability. To address this problem, a defense method for adversarial training based on meta-learning is proposed, which applies meta-learning to the adversarial-training process and exploits the stronger generalization ability of meta-learning to better solve the problems existing in adversarial training.

(5) In total, four different defense methods are proposed to defend against adversarial attacks. To study the performance of the proposed defense methods in real scenarios, we finally compare the performance of the individual defense methods and of their combinations.
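The adversarial-training idea underlying points (3) and (4) — perturbing each input in the direction that maximizes the loss, then updating the model on both the clean and the perturbed input — can be sketched with a toy example. The code below is a minimal illustration only, using the Fast Gradient Sign Method on a logistic model in NumPy; all function names and the model itself are illustrative assumptions, not the implementation used in this work:

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: step of size epsilon in the sign of the loss gradient, clipped to [0, 1]."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

def loss_grad_wrt_input(w, b, x, y):
    """Gradient of binary cross-entropy w.r.t. the input of a logistic model."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # sigmoid output
    return (p - y) * w                       # dL/dx

def train_step(w, b, x, y, lr=0.1, epsilon=0.03):
    """One adversarial-training step: update on the clean and the perturbed input."""
    x_adv = fgsm_perturb(x, loss_grad_wrt_input(w, b, x, y), epsilon)
    for xi in (x, x_adv):
        p = 1.0 / (1.0 + np.exp(-(w @ xi + b)))
        w = w - lr * (p - y) * xi  # dL/dw
        b = b - lr * (p - y)       # dL/db
    return w, b

# Toy "image" (flattened to 16 pixels) and linear classifier.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = rng.uniform(0.0, 1.0, size=16)
y = 1.0

g = loss_grad_wrt_input(w, b, x, y)
x_adv = fgsm_perturb(x, g, epsilon=0.03)
w, b = train_step(w, b, x, y)
```

Note that the perturbation is bounded: every pixel of `x_adv` differs from `x` by at most `epsilon`, which is what makes the attack imperceptible while still shifting the model's decision.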