
Research On Defense Methods Of Adversarial Examples Based On DCT Transform

Posted on: 2019-10-14
Degree: Master
Type: Thesis
Country: China
Candidate: M Yan
Full Text: PDF
GTID: 2428330566997395
Subject: Computer Science and Technology
Abstract/Summary:
Adversarial examples are inputs maliciously crafted, by adding small perturbations or noise, to attack a machine learning model. They look almost identical to the genuine samples, yet the model produces erroneous results entirely different from those for the genuine samples. Such attacks can seriously undermine the security of systems built on deep learning models, especially in security-sensitive applications. In this thesis, we first show that a DCT-based image representation has a degree of robustness to adversarial examples. Building on this observation, we propose an adversarial-example defense model that combines a DCT encoder with adversarial training: a DCT encoding layer is added to the model, and the defender generates adversarial examples and adds them to the training data, which effectively improves the model's adversarial robustness. Experiments show that, compared with adversarial training alone, combining the DCT encoder with adversarial training provides a stronger defense against adversarial examples generated by FGSM, BIM, and PGD. In particular, the model combining the DCT encoder with FGSM adversarial training strikes a good balance between computational efficiency and defensive effectiveness, which makes it practical for defending against adversarial examples on large-scale datasets.

The main contributions of this thesis are:
1. Attacked the MNIST, CIFAR-10, and ImageNet datasets using the fast gradient sign method (FGSM), the basic iterative method (BIM), and projected gradient descent (PGD), and studied how well adversarial examples generated by the same algorithm transfer across different models trained on the same dataset.
2. Developed an experimental demonstration platform for adversarial examples. A Django back-end framework and a Vue.js front-end framework keep the front end and back end fully separated; users can generate adversarial examples and view the recognition results online by uploading images directly or drawing on the canvas.
3. Improved the model's resistance to adversarial examples through adversarial training, and further proposed an adversarial-example defense method based on the DCT transform.
4. Verified the strong adversarial robustness of the adversarially trained model based on the DCT transform, showing improved defense against attacks based on FGSM, BIM, and PGD, and built a robust image recognition system to visually demonstrate the model's defensive effect against adversarial examples.
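The two core ingredients described above, a block-wise DCT image representation and FGSM-style perturbations, can be sketched as follows. This is a minimal illustration in NumPy/SciPy, not the thesis's actual implementation: the tile size of 8, the `[0, 1]` pixel range, and the function names are assumptions, and the loss gradient would in practice come from backpropagation through the classifier.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct2(img, block=8):
    # 2-D type-II DCT applied independently to each block x block tile
    # (as in JPEG); assumes the image dimensions are multiples of `block`.
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            t = img[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = dct(
                dct(t.T, norm='ortho').T, norm='ortho')
    return out

def fgsm_perturb(x, grad, eps):
    # FGSM step: move each pixel by eps in the sign direction of the
    # loss gradient, then clip back to the valid [0, 1] pixel range.
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

In an adversarial-training loop along these lines, each batch would be augmented with `fgsm_perturb`-generated examples, while `block_dct2` (as the DCT encoding layer) transforms inputs before they reach the classifier.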
Keywords/Search Tags: Adversarial examples, Adversarial training, DCT transform, Robustness