Deep neural networks are moving from basic research into practical commercial applications such as face recognition, license plate recognition, speech recognition, text translation, gesture recognition, and object detection. However, deep neural networks are susceptible to adversarial examples: adding small, carefully crafted perturbations to an input image can cause a model to misclassify it. This phenomenon has attracted attention from researchers in both academia and industry, and many studies propose novel attack methods to compromise deep neural network classifiers. The growth of these attacks has in turn prompted research on methods to mitigate them, that is, defense strategies against adversarial examples. Defending against adversarial examples improves the reliability of deep neural networks across applications. Attacks and defenses evolve in opposition, each advancing as the other does.

This thesis focuses on adversarial examples for learned models. It mainly studies iterative-training attack techniques based on convolutional neural networks, adversarial patch generation based on multilayer perceptrons, and adversarial example generation based on the Transformer.

First, targeting the adversarial-training defense strategy, an iterative-training attack method based on a convolutional neural network is proposed. Gradient-iteration adversarial attacks and the generation of adversarial examples with adversarial networks are studied, and a maximum-perturbation loss function and a convolutional generator network structure are improved and designed. Building on attacks against the originally trained model, a search algorithm for batch iterative training is proposed. To address the problem that training can become trapped in a local optimum instead of reaching the global optimum, a random restart algorithm is introduced. In attack experiments on the MNIST and CIFAR10 datasets, the attack success rate exceeds that of comparable methods by 4.02% and 24.12%, respectively.

Second, targeting defenses that transform inputs with a reconstructor, a multilayer-perceptron-based adversarial patch generation method is proposed. An adversarial patch is an attack that modifies a small region of an image to make a machine learning model produce an incorrect classification. In this thesis, the adversarial patch is realized with adversarial example generation techniques. The thesis mainly studies the influence of adversarial patches on the input reconstructor and improves the patch-generation algorithm and loss function. Through a multilayer-perceptron structure, adversarial perturbation information is extracted from the input image features, and the resulting adversarial patch effectively evades the transformation performed by the input reconstructor.

Finally, targeting adversarial-detection defenses, an adversarial example generation method based on the Transformer structure is proposed; the Transformer structure for adversarial example generation and the image-input position-embedding scheme are designed with reference to Transformer architectures used in visual classification. A Transformer-based adversarial attack algorithm is designed against adversarial-detection defenses, and its effectiveness is evaluated through ablation attack experiments and experiments against adversarial-detection defense models; ADT's attack success rate reached 97.4% in attack experiments against the Safety Net defense model.
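The combination of gradient-iteration attacks and random restarts described above can be illustrated with a minimal sketch. The toy logistic-regression model, the function name `pgd_attack_with_restarts`, and all hyperparameters below are illustrative assumptions, not the thesis's actual networks or settings; the idea is only to show how each restart begins from a random point in the perturbation ball so the search is not trapped by a single poor starting point.

```python
import numpy as np

def pgd_attack_with_restarts(x, y, w, b, eps=0.3, alpha=0.05, steps=20, restarts=5, rng=None):
    """Iterative sign-gradient attack with random restarts on a toy
    logistic-regression model p = sigmoid(w.x + b). Across restarts,
    the perturbation achieving the highest loss is kept."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_x, best_loss = x.copy(), -np.inf
    for _ in range(restarts):
        # Random restart: begin from a random point inside the eps-ball.
        x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        for _ in range(steps):
            z = w @ x_adv + b
            p = 1.0 / (1.0 + np.exp(-z))
            # Gradient of the cross-entropy loss with respect to the input.
            grad = (p - y) * w
            x_adv = x_adv + alpha * np.sign(grad)
            # Project back into the eps-ball and the valid pixel range.
            x_adv = np.clip(x_adv, x - eps, x + eps)
            x_adv = np.clip(x_adv, 0.0, 1.0)
        z = w @ x_adv + b
        loss = np.log1p(np.exp(z)) - y * z  # cross-entropy at the final point
        if loss > best_loss:
            best_loss, best_x = loss, x_adv
    return best_x
```

In a real attack the analytic gradient above would be replaced by backpropagation through the target convolutional network, but the restart-and-project loop has the same shape.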
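The key structural difference between an adversarial patch and a norm-bounded perturbation can also be sketched briefly. The snippet below only shows how a generated patch overwrites a region of the input; the generator itself (the thesis's multilayer perceptron) is abstracted away, and the function name and shapes are illustrative assumptions.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Overwrite a rectangular region of `image` with `patch`.
    Unlike norm-bounded perturbations, a patch may change its region
    arbitrarily, so no eps-ball projection is applied."""
    h, w = patch.shape[:2]
    out = image.copy()
    out[top:top + h, left:left + w] = patch
    return out
```

Because the patch is unconstrained inside its region, an input reconstructor tuned to remove small dense perturbations has no guarantee of undoing it, which is the weakness the thesis's patch-generation method exploits.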