Deep neural networks excel at complex computational problems and have been widely applied in many fields, especially computer vision. However, they also face many threats, among which the adversarial attack is a simple and effective one. Such an attack does not need to tamper with the structure of the network; it merely constructs adversarial examples by adding tiny perturbations to the input data. The network can be "misled" by adversarial examples and produce wrong results. Since adversarial examples are easy to craft and hard to distinguish from real examples by the human eye, adversarial attacks threaten the practical deployment of deep neural networks. The study of adversarial example generation is therefore of great significance for evaluating the robustness of deep neural models and for facilitating the development of defense methods. Traditional adversarial example generation methods suffer from high computational cost, dependence on the parameters or outputs of the target network, and overly simple measures of example naturalness, which makes black-box adversarial attacks difficult to realize in real-time scenarios. Attacks based on neural networks, especially generative adversarial networks (GANs), can alleviate these problems significantly. To address the shortcomings of traditional methods, this thesis proposes a black-box adversarial example generation method based on a variational autoencoder (VAE) and a generative adversarial network, and further proposes a method for enhancing adversarial example transferability under strict complete black-box conditions. The main research work and innovations of this thesis are as follows:

(1) A black-box adversarial attack method, LVGAtt, based on VAE-GAN is proposed. This thesis studies the loss function of each stage of the training process: the encoder's ability to map images from the data space to latent vectors is improved through pre-training, and a model for generating adversarial examples is then obtained through adversarial attack training. The method searches for adversarial perturbations in regions where the latent vectors are dense, which improves both the naturalness of the adversarial examples and the stability of GAN training. The GAN loss function is optimized with WGAN-div, which further stabilizes training. In addition, the VAE and GAN architectures are designed on the basis of ShuffleNet V2 and a lightweight ResNet, which preserves the attack success rate while reducing computational cost. The result is an adversarial example generation method with a high attack success rate and high naturalness that is trained under black-box conditions, is decoupled from the target network when generating adversarial examples, and generates examples efficiently. Experiments show that the adversarial examples generated by LVGAtt achieve the highest untargeted attack success rates of 98.66% on the MNIST dataset and 95.56% on the CIFAR-10 dataset, and the generation speed reaches 6383 FPS on a Jetson Xavier NX with limited computing resources. The method is therefore suitable for black-box attacks in scenarios with high real-time requirements and has practical significance.
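To make the generation pipeline of (1) concrete, the following is a minimal sketch, assuming PyTorch: an encoder maps an image to a latent vector, a generator decodes a bounded perturbation from that vector, and a discriminator is trained with the WGAN-div objective. The layer choices, latent dimension, perturbation budget, and WGAN-div coefficients (k, p) below are illustrative assumptions, not the thesis design.

# Minimal sketch (not the thesis code) of latent-space adversarial example
# generation with a WGAN-div discriminator term. Shapes and hyper-parameters
# are illustrative assumptions for 32x32 RGB inputs (CIFAR-10 sized).
import torch
import torch.nn as nn
import torch.autograd as autograd

LATENT_DIM, EPS, K, P = 128, 8.0 / 255, 2.0, 6.0   # assumed hyper-parameters

class Encoder(nn.Module):           # maps image -> latent vector (simplified VAE encoder)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 8 * 8, LATENT_DIM))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):         # decodes latent vector -> bounded perturbation
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):     # scores images for the WGAN-div objective
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(128 * 8 * 8, 1))
    def forward(self, x):
        return self.net(x)

def adversarial_example(enc, gen, x):
    """Encode x to the latent space, decode a perturbation, clip to an eps-ball."""
    delta = EPS * gen(enc(x))                       # Tanh output scaled into [-eps, eps]
    return torch.clamp(x + delta, 0.0, 1.0)

def wgan_div_d_loss(disc, x_real, x_fake):
    """WGAN-div critic loss: Wasserstein term plus gradient-norm penalties."""
    x_real = x_real.requires_grad_(True)
    x_fake = x_fake.detach().requires_grad_(True)
    d_real, d_fake = disc(x_real), disc(x_fake)
    grad_real = autograd.grad(d_real.sum(), x_real, create_graph=True)[0]
    grad_fake = autograd.grad(d_fake.sum(), x_fake, create_graph=True)[0]
    div = (grad_real.flatten(1).norm(2, dim=1) ** P +
           grad_fake.flatten(1).norm(2, dim=1) ** P).mean() * K / 2
    return d_fake.mean() - d_real.mean() + div

if __name__ == "__main__":
    enc, gen, disc = Encoder(), Generator(), Discriminator()
    x = torch.rand(4, 3, 32, 32)                    # stand-in for a CIFAR-10 batch
    x_adv = adversarial_example(enc, gen, x)
    print(wgan_div_d_loss(disc, x, x_adv).item())

Generating perturbations from the latent vector rather than pixel space is what keeps the examples close to the natural data manifold; the thesis additionally pre-trains the encoder and adds attack-specific loss terms that are omitted here.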
(2) For the more stringent complete black-box setting, a method for improving the transferability of adversarial attacks is explored, based on reinforcement training with an anti-perturbation network and an ensemble network. Building on data augmentation and a local target ensemble network, the anti-perturbation network is trained in an outer loop while the adversarial attack algorithm is trained in an inner loop, yielding an anti-perturbation network that automatically modifies an image to remove the perturbation while retaining its full semantic information. This network, the ensemble of local target networks, and LVGAtt are then trained together intensively, which ultimately improves the transferability of the adversarial examples generated by LVGAtt. Experiments show that this method improves the transferability of adversarial examples and also makes the attacks more effective against defended networks: the transfer-based attack success rate increases by 3.81%, and the success rate against different defense methods increases by 10.07% on average.
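A minimal sketch of the two-level training idea in (2), again assuming PyTorch: an outer step trains the anti-perturbation ("purifier") network to strip perturbations while keeping the local ensemble's prediction on the true label, and an inner step trains the attack generator so that its examples still fool the ensemble after purification. The module names, loss terms, and equal loss weights are illustrative assumptions; the data augmentation used in the thesis is omitted for brevity.

# Minimal sketch (not the thesis code) of alternating outer/inner training
# between an anti-perturbation network and an attack generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ensemble_logits(models, x):
    """Average the logits of the local surrogate ensemble."""
    return torch.stack([m(x) for m in models], dim=0).mean(dim=0)

def outer_step(purifier, models, opt_p, x_clean, x_adv, y):
    """Outer loop: train the purifier to strip the perturbation yet keep semantics."""
    x_pur = purifier(x_adv)
    loss = F.mse_loss(x_pur, x_clean) + F.cross_entropy(ensemble_logits(models, x_pur), y)
    opt_p.zero_grad(); loss.backward(); opt_p.step()
    return loss.item()

def inner_step(attacker, purifier, models, opt_a, x_clean, y):
    """Inner loop: train the attacker (e.g. an LVGAtt-style generator) so its
    examples fool the ensemble even after purification; only opt_a is stepped."""
    x_adv = attacker(x_clean)
    logits = ensemble_logits(models, purifier(x_adv))
    loss = -F.cross_entropy(logits, y)              # untargeted: push away from the true label
    opt_a.zero_grad(); loss.backward(); opt_a.step()
    return x_adv.detach(), loss.item()

if __name__ == "__main__":
    # Toy stand-ins for the real purifier, attack generator, and surrogate ensemble.
    purifier = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
    attacker = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
    models = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(2)]
    opt_p = torch.optim.Adam(purifier.parameters(), lr=1e-4)
    opt_a = torch.optim.Adam(attacker.parameters(), lr=1e-4)
    x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
    x_adv, _ = inner_step(attacker, purifier, models, opt_a, x, y)   # inner: train attacker
    outer_step(purifier, models, opt_p, x, x_adv, y)                 # outer: train purifier

The intuition behind the design is that an attacker forced to survive both the surrogate ensemble and a perturbation-removing network cannot rely on model-specific or easily removable artifacts, which is what improves transferability to unseen and defended targets.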