
Research On Algorithm Of Adversarial Example Generation Based On Generative Adversarial Network

Posted on: 2023-08-17 | Degree: Master | Type: Thesis
Country: China | Candidate: Z M Yang
GTID: 2568306833489294 | Subject: Engineering
Abstract/Summary:
Owing to their high classification accuracy, deep learning models are widely used in computer vision for image classification tasks, and the artificial intelligence security issues that accompany them have drawn increasing attention. The core of the security problem lies in the model's input: specially crafted samples can cause a model to make wrong predictions with high confidence, and adversarial examples carrying only slight perturbations can easily induce misclassifications in deep neural networks. Studying how adversarial examples are generated helps expose the latent weaknesses of deep learning models and, in turn, helps deep neural networks guard against such risks. Although the generation of adversarial examples has long been studied, most existing methods still cannot balance attack accuracy against the visual authenticity of the generated images, so generating higher-quality adversarial examples calls for further research. This thesis proposes an adversarial example generation algorithm based on a generative adversarial network that produces transferable adversarial examples stably and efficiently; the generated examples can also be used for adversarial training to improve the robustness of image classification models in practical application scenarios. The main contributions of this thesis are as follows:

(1) The mainstream way to obtain adversarial examples is to superimpose a perturbation on the original image. Following this idea, we superimpose diverse perturbations that remain imperceptible under a given perturbation budget. Two perturbation generation methods are designed: one based on a convolutional autoencoder with residual blocks, and the other based on a denoising convolutional autoencoder. Serializing the two yields a perturbation autoencoder, that is, an improved noise-fusion generator, which provides initial reference samples for subsequent discriminator training and for optimizing the generation algorithm (a sketch of such a serialized generator is given below).
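The thesis does not include code; the following is a minimal PyTorch sketch of a serialized perturbation generator in the spirit of contribution (1): a convolutional autoencoder with residual blocks followed by a denoising-style autoencoder, whose bounded output is superimposed on the clean image. All module names, layer sizes, and the perturbation bound eps are illustrative assumptions, not the thesis's exact architecture.

```python
# Hedged sketch (PyTorch), assuming images normalized to [0, 1].
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))  # skip connection

class ConvAutoencoder(nn.Module):
    """Convolutional autoencoder, optionally with residual blocks in the bottleneck."""
    def __init__(self, in_ch=3, base=32, n_res=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.res = nn.Sequential(*[ResidualBlock(base * 2) for _ in range(n_res)])
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, in_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.decoder(self.res(self.encoder(x)))

class SerializedPerturbationGenerator(nn.Module):
    """Serializes a residual autoencoder with a denoising-style autoencoder and
    superimposes the bounded result on the clean image."""
    def __init__(self, in_ch=3, eps=8 / 255):
        super().__init__()
        self.stage1 = ConvAutoencoder(in_ch)            # autoencoder with residual blocks
        self.stage2 = ConvAutoencoder(in_ch, n_res=0)   # denoising-style refinement stage
        self.eps = eps
    def forward(self, x):
        rough = self.stage1(x)                           # coarse perturbation proposal
        delta = self.stage2(rough)                       # refined (denoised) perturbation
        delta = torch.clamp(delta, -self.eps, self.eps)  # keep the perturbation imperceptible
        return torch.clamp(x + delta, 0.0, 1.0)          # adversarial example
```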
(2) An input discrimination algorithm based on essential-domain exploration is proposed. The idea is to exploit the dependencies among image pixels and to use an external convolutional neural network as the backbone that separates the real and fake samples fed to the discriminator. Specifically, self-attention modules are inserted as network layers alternating with the convolutional layers so that the distance between the probability distribution of the generated examples and the essential probability distribution of the original dataset can be measured accurately. Spectral normalization is applied to the feature matrices of the convolutional layers and of the self-attention modules, limiting the scaling of the corresponding linear maps to at most 1 so that small changes in the input cannot produce large changes in the output.

(3) Coupling the pixel perturbation algorithm with the input discrimination algorithm yields the SAdvGAN (Serial-generated Adversarial sample based on the Generative Adversarial Network) model. In addition to bounding the size of the adversarial perturbation and measuring the distance between predicted classes, the loss function used during GAN training is improved so that, on top of stable training, it globally estimates the probability that the data seen by the input discriminator is on average more realistic than randomly sampled data of the opposite type (see the loss sketch below).

To verify the effectiveness of the proposed model, several experiments are conducted on the public datasets MNIST and CIFAR10 and on the high-resolution Adversarial Learning Development dataset. Evaluations across four aspects, including attack success rate, transferability, and training stability, demonstrate the effectiveness of the algorithms proposed in this study.
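For concreteness, here is a minimal PyTorch sketch of an input discriminator that combines spectral normalization with a self-attention layer, together with a loss that estimates whether real data is on average more realistic than fake data, which is the relativistic-average form that the description in contribution (3) suggests. Channel counts, the attention placement, and the loss form are assumptions, not the published SAdvGAN configuration.

```python
# Hedged sketch (PyTorch): spectrally normalized discriminator with self-attention,
# plus a relativistic-average-style GAN loss. Details are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import spectral_norm

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over spatial positions."""
    def __init__(self, ch):
        super().__init__()
        self.q = spectral_norm(nn.Conv2d(ch, ch // 8, 1))
        self.k = spectral_norm(nn.Conv2d(ch, ch // 8, 1))
        self.v = spectral_norm(nn.Conv2d(ch, ch, 1))
        self.gamma = nn.Parameter(torch.zeros(1))
    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)           # (b, hw, c//8)
        k = self.k(x).flatten(2)                           # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)                # pixel-to-pixel dependencies
        v = self.v(x).flatten(2)                           # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                        # residual attention

class Discriminator(nn.Module):
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Conv2d(in_ch, base, 4, 2, 1)), nn.LeakyReLU(0.2, inplace=True),
            spectral_norm(nn.Conv2d(base, base * 2, 4, 2, 1)), nn.LeakyReLU(0.2, inplace=True),
            SelfAttention(base * 2),                        # attention alternating with conv layers
            spectral_norm(nn.Conv2d(base * 2, base * 4, 4, 2, 1)), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            spectral_norm(nn.Linear(base * 4, 1)),          # raw realism score (logit)
        )
    def forward(self, x):
        return self.net(x)

def relativistic_average_d_loss(d_real, d_fake):
    """D learns that real samples are, on average, more realistic than fakes."""
    real_rel = d_real - d_fake.mean()
    fake_rel = d_fake - d_real.mean()
    return (F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel))
            + F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel)))

def relativistic_average_g_loss(d_real, d_fake):
    """G pushes fakes to look, on average, more realistic than real samples."""
    real_rel = d_real - d_fake.mean()
    fake_rel = d_fake - d_real.mean()
    return (F.binary_cross_entropy_with_logits(fake_rel, torch.ones_like(fake_rel))
            + F.binary_cross_entropy_with_logits(real_rel, torch.zeros_like(real_rel)))
```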
Keywords/Search Tags: AI security, adversarial examples, generative adversarial networks, attack success rate, transferability