
Research on Robustness and Invisibility of Adversarial Attacks

Posted on: 2020-07-10    Degree: Master    Type: Thesis
Country: China    Candidate: G J Chen    Full Text: PDF
GTID: 2428330575494687    Subject: System theory
Abstract/Summary:
Adversarial attacks are a hot topic in the field of machine learning. The principle of an adversarial attack is to deceive a deep neural network into making a wrong judgment by feeding it an adversarial example: a new sample obtained by carefully adding tiny perturbations that are imperceptible to the human eye. For applications that are sensitive to the security and reliability of neural networks, studying adversarial attack techniques is of great significance.

With growing attention, research on adversarial attacks has achieved notable results in adversarial example generation, attack evaluation, and other aspects, but the following shortcomings remain:

(1) Adversarial examples are not robust enough; they easily become ineffective under affine transformations such as rotation, scaling, translation, and shearing.
(2) Due to a lack of invisibility, the attack perturbation is easily perceived by humans.
(3) Adversarial examples lack transferability, maintaining a high attack success rate only against the specific classification model they were crafted for.

This paper focuses on the above issues, and our main contributions are summarized as follows:

We propose an adversarial example enhancement method based on spatial transformation (sketched below), which improves the robustness of adversarial examples to affine transformations such as rotation, scaling, translation, and shearing. Experimental results show that it increases the attack success rate of the FGSM, BIM, and DeepFool methods by 4%~14% on the CIFAR-10 dataset and by 3%~4% on the GTSRB dataset.

To address the insufficient invisibility of adversarial examples, we propose a generation method based on an adversarial mechanism (sketched below). By introducing a new invisibility-evaluation constraint, adversarial training is used to enhance the invisibility of the resulting examples. Experimental results show that the perturbation rate of the proposed method is below 23% on the CIFAR-10 dataset, a reduction of 8%~35% compared with common attack methods such as FGSM, BIM, and DeepFool; on the GTSRB dataset the perturbation rate is below 47%, a reduction of 15%~24%.

To address the lack of transferability of adversarial examples, we propose an attack model generation method based on generative adversarial networks that integrates the spatial transformation enhancement mechanism (sketched below), achieving adaptive transferability and robustness across different input samples. Experimental results show white-box and black-box attack success rates of 98% and 94% on the CIFAR-10 dataset, and 91% and 66% on the GTSRB dataset. Compared with the FGSM, BIM, DeepFool, and advGAN methods, the black-box attack success rate is improved to varying degrees.
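To illustrate the first contribution, the following is a minimal sketch of a spatial-transformation enhancement applied to an FGSM-style attack: the gradient is averaged over random affine transformations (rotation, scaling, translation, shearing) so that the resulting perturbation survives those transformations. It assumes PyTorch and torchvision; the function name, transformation ranges, and step size are illustrative, not the thesis's actual parameters.

```python
import torch
import torchvision.transforms.functional as TF

def spatial_robust_fgsm(model, x, y, eps=8/255, n_transforms=10):
    """FGSM-style step whose gradient is averaged over random affine transforms."""
    ce = torch.nn.CrossEntropyLoss()
    grad_sum = torch.zeros_like(x)
    for _ in range(n_transforms):
        x_adv = x.clone().detach().requires_grad_(True)
        x_t = TF.affine(
            x_adv,
            angle=float(torch.empty(1).uniform_(-15, 15)),      # rotation
            translate=[int(torch.randint(-3, 4, (1,))),         # translation
                       int(torch.randint(-3, 4, (1,)))],
            scale=float(torch.empty(1).uniform_(0.9, 1.1)),     # scaling
            shear=float(torch.empty(1).uniform_(-10, 10)),      # shearing
        )
        loss = ce(model(x_t), y)
        grad_sum += torch.autograd.grad(loss, x_adv)[0]
    # single signed-gradient step, as in FGSM, using the averaged gradient
    return torch.clamp(x + eps * grad_sum.sign(), 0, 1)
```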
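For the second contribution, the abstract does not specify the invisibility-evaluation constraint, so the sketch below substitutes a plain L1 penalty on the perturbation as a stand-in, optimized jointly with an untargeted attack objective; all names, weights, and step counts are hypothetical.

```python
import torch

def invisible_attack(model, x, y, steps=100, lr=0.01, lam=0.1):
    """Untargeted attack with an L1 invisibility penalty on the perturbation."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        logits = model(torch.clamp(x + delta, 0, 1))
        # maximize classification loss while keeping the perturbation small/sparse
        loss = -ce(logits, y) + lam * delta.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(x + delta.detach(), 0, 1)
```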
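For the third contribution, the following is a minimal sketch of one training step of an advGAN-style attack model: a generator G produces a perturbation G(x), a discriminator D tries to tell perturbed images from clean ones, and the target model f supplies the attack loss. The loss weights, the hinge bound c, and the omission of the spatial-transformation term are simplifications of this sketch, not the thesis's formulation. The spatial-transformation enhancement above could, for example, be folded into the attack loss by evaluating f on randomly transformed copies of x_adv.

```python
import torch
import torch.nn.functional as F

def attack_gan_step(G, D, f, x, y, opt_G, opt_D, c=0.1, alpha=1.0, beta=1.0):
    """One training step of an advGAN-style attack model (illustrative)."""
    perturb = G(x)
    x_adv = torch.clamp(x + perturb, 0, 1)

    # Discriminator update: distinguish clean images from adversarial ones.
    d_real, d_fake = D(x), D(x_adv.detach())
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator update: look "clean" to D, fool the target model f,
    # and keep the perturbation norm below a soft bound c.
    d_out = D(x_adv)
    gan_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    adv_loss = -F.cross_entropy(f(x_adv), y)   # untargeted: push away from label y
    hinge = torch.clamp(perturb.flatten(1).norm(dim=1) - c, min=0).mean()
    g_loss = gan_loss + alpha * adv_loss + beta * hinge
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return float(g_loss), float(d_loss)
```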
Keywords/Search Tags:Deep Neural Network, Adversarial Attack, Robustness, Invisibility