
Research On Robustness Of Deep Learning Model Based On Adversarial Examples

Posted on: 2024-09-10  Degree: Master  Type: Thesis
Country: China  Candidate: J L Yan  Full Text: PDF
GTID: 2568307073450234  Subject: Cyberspace security
Abstract/Summary:
As the core technology of artificial intelligence, deep neural networks shine in a wide range of applications. They not only achieve good performance on many tasks, but even surpass human ability in some fields. However, studies have found that deep neural networks are vulnerable to adversarial examples, which cause models to make errors and pose great challenges to the use of deep neural networks in security-sensitive systems. It is therefore imperative to build safe and reliable deep learning systems for adversarial environments.

For defending convolutional neural network models against adversarial examples in image classification, the defense method widely recognized by scholars as effective is adversarial training. However, compared with standard training, adversarial training suffers from a large robust generalization gap and from robust overfitting, so solving these problems is the key to further improving the adversarial robustness that adversarial training can achieve. Recently, inspired by the success of the Transformer model in natural language processing, researchers proposed an image classification model, the Vision Transformer (ViT). The latest research shows that although the ViT model has strong learning ability and advanced performance, it is vulnerable to adversarial examples, which severely degrade its performance. Existing research on the ViT model focuses on classification accuracy on normal examples and lacks research on robustness to adversarial examples, so the security of the ViT model in an adversarial environment deserves in-depth study and exploration.

To address the large robust generalization gap and robust overfitting of defenses based on adversarial training, and the lack of research on the adversarial robustness of the ViT model, this paper studies the following two aspects and proposes concrete solutions:

(1) To address the large robust generalization gap and robust overfitting of adversarial training (AT) for convolutional neural network models in the image classification domain, this paper proposes an improved adversarial training algorithm, AT-AMP (Adversarial Training-Adversarial Model Perturbation). The method introduces a strategy that smooths the weight loss of the convolutional neural network model into the adversarial training mechanism, making it easier for the optimization to converge to a flatter local minimum, which largely alleviates the robust generalization gap and robust overfitting of adversarial training. Experimental results show that the proposed AT-AMP approach consistently improves the defense effectiveness of adversarial training across three image datasets (SVHN, CIFAR-10, and CIFAR-100), two threat models (L2 and L∞), multiple variants of the adversarial training framework, and different black-box and white-box adversarial attacks. In addition, this paper compares the proposed AT-AMP method with two classical regularization techniques and two data augmentation techniques for narrowing the robust generalization gap and mitigating robust overfitting of adversarial training. The experiments show that AT-AMP achieves state-of-the-art results compared with these regularization and data augmentation techniques.

(2) To address the lack of research on improving the robustness of ViT models against adversarial examples in the image classification field, this paper proposes a robust ViT architecture, Locality in Vision Transformer (LiVT). The main design idea is as follows: the main structure of the model adopts the TNT (Transformer iN Transformer) architecture, and, to compensate for the difficulty the FFN (feed-forward network) module of the classic ViT model has in effectively capturing local dependencies in the image, this paper integrates a depthwise separable convolution module with local inductive bias into the FFN module of the ViT model, so that the model can capture local details of the image and learn more robust classification features, thereby enhancing the adversarial robustness of the model. The proposed LiVT model is tested on the GTSRB, CIFAR-10, and CIFAR-100 image datasets. Compared with other ViT variant models, LiVT achieves the best results on both normal images and adversarial examples, which verifies the effectiveness of the proposed model architecture.
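The AT-AMP training step described in (1) can be illustrated on a toy model. The sketch below is not the thesis's implementation: it uses a one-dimensional logistic classifier instead of a deep network, a single FGSM step in place of multi-step PGD, and the function names and hyperparameters (`eps_x`, `eps_w`, `lr`) are hypothetical. What it does preserve is the three-phase structure: perturb the input to raise the loss, perturb the weights to raise the loss, then descend using gradients taken at the perturbed weights, which biases optimization toward flatter minima.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    # Binary cross-entropy of the logistic model p(y=1|x) = sigmoid(w*x + b).
    p = sigmoid(w * x + b)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def grads(w, b, x, y):
    # Gradients of the loss w.r.t. the weights (dL/dw, dL/db).
    err = sigmoid(w * x + b) - y
    return np.mean(err * x), np.mean(err)

def grad_x(w, b, x, y):
    # Per-example gradient of the loss w.r.t. the input.
    return (sigmoid(w * x + b) - y) * w

def at_amp_step(w, b, x, y, eps_x=0.1, eps_w=0.05, lr=0.5):
    # (1) Inner maximization over the input: one FGSM step
    #     (the thesis uses multi-step PGD on images).
    x_adv = x + eps_x * np.sign(grad_x(w, b, x, y))
    # (2) Adversarial model perturbation: move the weights a small,
    #     normalized step in the ascent direction of the adversarial loss.
    gw, gb = grads(w, b, x_adv, y)
    norm = np.sqrt(gw**2 + gb**2) + 1e-12
    w_p, b_p = w + eps_w * gw / norm, b + eps_w * gb / norm
    # (3) Descend using gradients evaluated at the *perturbed* weights,
    #     which favors convergence to flatter local minima.
    gw_p, gb_p = grads(w_p, b_p, x_adv, y)
    return w - lr * gw_p, b - lr * gb_p
```

On a two-cluster toy dataset, iterating `at_amp_step` drives the adversarial loss down while the extra weight-space step smooths the loss surface around the solution; in the thesis this mechanism is what narrows the robust generalization gap of full adversarial training.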
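The locality-augmented FFN described in (2) can also be sketched in miniature. The version below is illustrative only, with assumed shapes and names (`local_ffn`, `depthwise_conv3x3`, a 3×3 kernel): it reshapes the patch-token sequence back into its 2D grid and applies a depthwise convolution between the two linear layers of a standard ViT FFN, so each token mixes with its spatial neighbors, the local inductive bias the plain FFN lacks.

```python
import numpy as np

def depthwise_conv3x3(x, k):
    # x: (H, W, C) feature map; k: (3, 3, C), one 3x3 filter per channel.
    # Zero padding, stride 1: each channel is convolved independently.
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * k, axis=(0, 1))
    return out

def local_ffn(tokens, W1, b1, k, W2, b2, grid):
    # tokens: (N, D) patch embeddings; grid = (H, W) with H * W == N.
    H, Wd = grid
    h = tokens @ W1 + b1                             # expand to (N, D_hid)
    h = depthwise_conv3x3(h.reshape(H, Wd, -1), k)   # local spatial mixing
    h = np.maximum(h.reshape(len(tokens), -1), 0.0)  # nonlinearity
    return h @ W2 + b2                               # project back to (N, D)
```

A full depthwise *separable* block would follow the 3×3 depthwise convolution with a 1×1 pointwise convolution; here the second linear layer plays that per-token mixing role, which is the design choice that lets the module drop into the FFN without changing the ViT's token interface.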
Keywords/Search Tags: Deep Learning, Adversarial Examples Defense, Adversarial Training, Convolutional Neural Network, Vision Transformer