
Research On Defense Methods Against Adversarial Attack Based On Deep Supervision And Noise Injection

Posted on: 2021-05-13
Degree: Master
Type: Thesis
Country: China
Candidate: Q Jiang
Full Text: PDF
GTID: 2428330647958916
Subject: Computer Science and Technology
Abstract/Summary:
In recent years, deep learning technology has developed rapidly and has shown superior performance on many challenging machine learning tasks, such as image classification, natural language processing, and speech recognition. At the same time, it is changing our way of life and is increasingly becoming an indispensable part of it, for example in autonomous driving, face recognition at high-speed railway stations, and Alipay. However, researchers have recently discovered that deep learning models carry security risks and are easily affected by adversarial examples. An adversarial example is a sample formed by deliberately adding subtle perturbations, imperceptible to humans, to the input data; it can make a classification model output incorrect results with high confidence, which poses further challenges for deep learning research. Since the discovery of adversarial examples, many researchers have devoted themselves to defense methods and have achieved notable results, but so far no method completely eliminates the threat of adversarial examples.

In this context, this thesis starts from the adversarial example problem of convolutional neural networks in deep learning and proposes two defenses: a defense based on a supervision mechanism and an RBF deep neural network model based on noise injection. Both can effectively defend against adversarial attacks to a certain extent, and neither requires an additional training set, reducing overhead. The main research contents and contributions of this thesis are summarized as follows:

1. Using four common attack algorithms, FGSM, DeepFool, BIM, and C&W, to attack models trained on the MNIST, CIFAR-10, and Fashion-MNIST datasets, three threat models are proposed for measuring and analyzing the performance of the methods proposed in this thesis. (A minimal FGSM sketch appears after this list.)

2. An adversarial example defense model based on a supervision mechanism is proposed. The model adds a supervising layer to the original convolutional neural network and improves robustness and defense ability against adversarial examples by improving the loss function. The LeNet-5 and VGG networks are used as the original network models. Experimental results on the benchmark MNIST and CIFAR-10 datasets confirm that the proposed method does not affect the classification performance of the model on clean samples, effectively counters the transferability of adversarial examples, and increases the attacker's difficulty. (A hedged deep-supervision sketch appears after this list.)

3. An RBF deep neural network model based on noise injection is proposed. Building on the original neural network, it uses the approximation capability of the RBF neural network and the smoothing effect of Gaussian noise injection to make the network's classification more robust and its decision boundary smoother. Experimental results on the benchmark MNIST and Fashion-MNIST datasets verify that the proposed model not only defends against adversarial attacks but is also robust to attackers' secondary attacks. (A sketch appears after this list.)
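As a point of reference for the threat models in contribution 1, the following is a minimal sketch of the FGSM attack in PyTorch. The model handle, epsilon value, and pixel range [0, 1] are illustrative assumptions; the abstract does not give the thesis's exact attack settings.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """FGSM: x_adv = x + epsilon * sign(grad_x of the loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a
    # valid pixel range (assumed [0, 1] here).
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```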
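For contribution 2, the abstract does not specify the supervising layer or the improved loss, so the sketch below shows one generic deep-supervision arrangement: an auxiliary classifier head attached to an intermediate feature map of a LeNet-5-style network, whose cross-entropy term is added to the main loss with an assumed weight lam. All layer sizes and the weight are assumptions, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedLeNet(nn.Module):
    """LeNet-5-style network with an auxiliary (supervising) head."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)              # 28x28 -> 24x24
        self.conv2 = nn.Conv2d(6, 16, 5)             # 12x12 -> 8x8
        self.fc = nn.Linear(16 * 4 * 4, num_classes)
        # Auxiliary head on the first block's pooled feature map.
        self.aux = nn.Linear(6 * 12 * 12, num_classes)

    def forward(self, x):
        h1 = F.max_pool2d(F.relu(self.conv1(x)), 2)  # (B, 6, 12, 12)
        h2 = F.max_pool2d(F.relu(self.conv2(h1)), 2) # (B, 16, 4, 4)
        return self.fc(h2.flatten(1)), self.aux(h1.flatten(1))

def supervised_loss(main_logits, aux_logits, y, lam=0.3):
    # Joint objective: main cross-entropy plus a weighted auxiliary term
    # (lam is an assumed hyperparameter).
    return F.cross_entropy(main_logits, y) + lam * F.cross_entropy(aux_logits, y)
```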
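For contribution 3, the sketch below pairs a learnable-center RBF layer with Gaussian noise injection at training time, which is one plausible reading of the abstract's description. The number of centers, gamma, and noise level sigma are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RBFLayer(nn.Module):
    """phi_j(x) = exp(-gamma * ||x - c_j||^2) over learnable centers c_j."""
    def __init__(self, in_features, num_centers, gamma=1.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_centers, in_features))
        self.gamma = gamma

    def forward(self, x):
        dist_sq = torch.cdist(x, self.centers).pow(2)
        return torch.exp(-self.gamma * dist_sq)

class NoisyRBFNet(nn.Module):
    def __init__(self, in_features=784, num_centers=64, num_classes=10, sigma=0.1):
        super().__init__()
        self.sigma = sigma
        self.rbf = RBFLayer(in_features, num_centers)
        self.out = nn.Linear(num_centers, num_classes)

    def forward(self, x):
        x = x.flatten(1)
        if self.training:
            # Inject Gaussian noise during training so the learned
            # decision boundary is smoothed.
            x = x + self.sigma * torch.randn_like(x)
        return self.out(self.rbf(x))
```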
Keywords/Search Tags: deep learning, convolutional neural network, adversarial examples, adversarial example defense