
Research on a Defense Method for Adversarial Examples Based on Style Transfer

Posted on: 2021-02-13    Degree: Master    Type: Thesis
Country: China    Candidate: S Q Ji    Full Text: PDF
GTID: 2428330611499421    Subject: Computer Science and Technology
Abstract/Summary:
In recent years, deep learning, a major branch of machine learning, has developed rapidly and attracted wide attention at home and abroad, and its applications have grown increasingly broad, including image classification, object detection, and autonomous driving. However, recent studies have shown that image recognition systems based on deep learning are vulnerable to malicious attacks: an attacker can force a deep learning model to misclassify an image by adding imperceptible modifications to it. An image with such a perturbation added is called an adversarial example. To improve the security of image recognition systems, research on defending against adversarial examples is urgently needed. Many researchers currently seek to enhance the robustness of image classification models by studying adversarial example defenses, and robustness has become a key property for guaranteeing the reliability of deep learning models. However, current research on defense methods focuses mainly on improving the classification model itself, and the resulting defenses are limited in effect and incomplete in capability. This thesis studies how to improve the robustness of the model by transforming the input before it enters the image classification system, so as to resist adversarial attacks.

First, this thesis experimentally investigates a previously proposed robust image classification model with a shape and contour preference, and finds that it is not robust enough to resist adversarial perturbations, in some cases losing its defense ability entirely. Guided by these experiments, the thesis proposes an adversarial example defense method based on style transfer, designs and implements the defense model, and determines the model parameters experimentally.

Second, four common adversarial attack algorithms, FGSM, BIM, PGD, and MIM, are used to attack the image recognition model. Defense experiments are carried out on the adversarial examples generated by the four attack algorithms, the defense effect of the style-transfer-based defense model under different parameters is studied, and the results are compared with those of other defense methods based on input transformation.

Finally, to further analyze adversarial example defenses, this thesis builds an experimental platform that integrates adversarial example attacks with defense-effect demonstration, and the platform's functionality is tested. The experiments show that the style-transfer-based defense model can effectively defend against adversarial examples, and that its defense capability exceeds that of other defense methods based on input transformation.
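To make the input-transformation idea above concrete, the following is a minimal PyTorch sketch of a classifier wrapped with a style-transfer preprocessing step. It is an illustration only: transfer_net is a hypothetical stand-in for the thesis's style-transfer model, whose architecture and parameters are not given in this abstract, and classifier is any image classifier taking inputs in [0, 1].

import torch
import torch.nn as nn

class StyleTransferDefense(nn.Module):
    # Guards a classifier with a style-transfer input transformation.
    # transfer_net: hypothetical image-to-image style-transfer network
    #   (e.g., a fast neural-style model); a stand-in, not the thesis's model.
    # classifier: any image classifier expecting inputs in [0, 1].
    def __init__(self, transfer_net: nn.Module, classifier: nn.Module):
        super().__init__()
        self.transfer_net = transfer_net
        self.classifier = classifier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-render the (possibly adversarial) input through style transfer;
        # the re-synthesis is intended to wash out the crafted perturbation
        # while preserving the image content, then classify the result.
        x_transformed = self.transfer_net(x).clamp(0.0, 1.0)
        return self.classifier(x_transformed)

Because the defense sits purely at the input, the classifier itself need not be retrained, which is the property shared by the input-transformation defenses the thesis compares against.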
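For reference, the four attacks named above are all gradient-based. Below is a minimal sketch of FGSM and of its iterative projected variant PGD (BIM is essentially PGD without the random start), assuming a PyTorch classifier model and images scaled to [0, 1]. This follows the published algorithms in generic form; it is not the thesis's experimental code.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Fast Gradient Sign Method: one step of size epsilon in the
    # direction that increases the classification loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

def pgd_attack(model, x, y, epsilon, alpha, steps):
    # Projected Gradient Descent: iterated gradient-sign steps of size
    # alpha, projected back onto the L-infinity ball of radius epsilon
    # around the clean input. Dropping the random start gives BIM.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).clamp(0.0, 1.0)
    return x_adv.detach()

A typical evaluation would generate x_adv with, e.g., epsilon=8/255, alpha=2/255, steps=10 (common defaults, not necessarily the thesis's settings) and compare the model's accuracy on x_adv with and without the style-transfer preprocessing in front of it. MIM differs from BIM in adding a momentum term to the accumulated gradient before taking its sign.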
Keywords/Search Tags:Deep Learning, Information Security, Adversarial Example, Style Transfer