
Research On Adversarial Example Generation Methods In Deep Neural Networks

Posted on: 2022-01-31
Degree: Master
Type: Thesis
Country: China
Candidate: J B Liu
Full Text: PDF
GTID: 2518306491966299
Subject: Computer technology
Abstract/Summary:
Driven by big data and hardware acceleration, research on deep neural networks has made significant progress in the field of computer vision. However, studies have found that deep neural networks are vulnerable to carefully crafted adversarial example attacks. An adversarial example is a sample to which an attacker adds tiny, imperceptible perturbations according to certain rules, so that the deep neural network is easily fooled into misclassification at test or deployment time. Adversarial examples have thus become one of the main risks facing deep neural network applications. In real-world scenarios, the target network is either a white-box or a black-box model from the attacker's perspective, and achieving a high success rate is considerably harder in the black-box setting than in the white-box setting. To make adversarial examples highly aggressive under white-box conditions while also improving their attack ability under black-box conditions, this thesis proposes two optimization algorithms from different perspectives, as follows:

(1) To address the limited white-box and black-box aggressiveness of traditional gradient-based attack methods, we propose an adversarial example generation method based on a scale-invariant root-mean-square gradient sign. The method first integrates root mean square propagation (RMSProp), a gradient-descent optimization technique, into the fast gradient sign method, yielding an RMSProp-based fast gradient sign method: the accumulated squared gradient adjusts the magnitude of the current gradient update, which accelerates updates in flat regions of the parameter space. Second, the scale-invariance property of deep neural networks is introduced, through which differently scaled images with similar loss values can be obtained; these can stand in for an ensemble of multiple models when generating adversarial examples, producing the scale-invariant root-mean-square gradient sign method (SRMS-FGSM). Experiments on the ImageNet dataset show that, compared with traditional gradient methods, SRMS-FGSM not only achieves a 100% attack success rate in the white-box setting but also significantly improves attack performance in the black-box setting.

(2) To address the insufficient black-box attack strength of momentum-based gradient attack methods, a self-ensemble adversarial example generation method based on an accelerated gradient is proposed. First, the sum of the current gradient of the input sample and the accumulated momentum gradient is used to construct a new gradient update value: during gradient ascent this accelerates the iterative update, while the momentum component stabilizes the update direction. From this gradient-optimization idea, an accelerated gradient sign method is derived. Second, to further improve the aggressiveness of the adversarial examples, the idea of model ensembling is adopted and the scale-invariance property is incorporated into the accelerated gradient sign method, so that adversarial examples achieve the effect of multi-model training through self-ensembling; this produces the self-ensemble accelerated gradient sign method based on an accelerated gradient (SAI-FGSM). Experiments on the ImageNet dataset show that, compared with momentum-based gradient attacks, SAI-FGSM greatly improves the black-box attack ability of adversarial examples and can also effectively attack models equipped with adversarial defenses.
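The SRMS-FGSM idea described in (1) can be illustrated with a minimal numpy sketch. This is not the thesis's implementation: the decay rate, the number of scale copies `m`, the per-step budget `alpha = eps/steps`, and the toy `grad_fn` are all assumptions for illustration; in practice `grad_fn` would be the loss gradient of a real network with respect to its input.

```python
import numpy as np

def srms_fgsm(x, grad_fn, eps=0.1, steps=10, m=5, decay=0.9, delta=1e-8):
    """Sketch of a scale-invariant RMSProp sign attack (SRMS-FGSM-style).

    grad_fn(x) returns the loss gradient w.r.t. the input. Each step
    averages the gradient over m scaled copies x / 2^i (scale invariance
    as a self-ensemble), rescales it by an RMSProp-style running average
    of squared gradients, and takes a signed ascent step inside the
    L_inf ball of radius eps around the original input.
    """
    alpha = eps / steps                      # per-step perturbation budget
    x_adv = x.astype(float).copy()
    s = np.zeros_like(x_adv)                 # accumulated squared gradient
    for _ in range(steps):
        # self-ensemble over scale-invariant copies of the current input
        g = np.mean([grad_fn(x_adv / 2**i) for i in range(m)], axis=0)
        # RMSProp-style accumulation adjusts the update magnitude
        s = decay * s + (1 - decay) * g**2
        x_adv = x_adv + alpha * np.sign(g / (np.sqrt(s) + delta))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay in the L_inf ball
    return x_adv
```

With a toy quadratic loss `L(x) = ||x||^2` (so `grad_fn = lambda z: 2*z`), the attack pushes each coordinate away from zero, increasing the loss while the perturbation stays within `eps`.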
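The accelerated-gradient update in (2) can be sketched similarly. Here the "accelerated gradient" is modelled Nesterov-style as an assumption: the gradient is evaluated at a look-ahead point, and a momentum buffer accumulates the L1-normalised gradient, combining current gradient information with accumulated momentum as the abstract describes. The momentum factor `mu` and the look-ahead form are illustrative choices, not the thesis's exact settings.

```python
import numpy as np

def sai_fgsm(x, grad_fn, eps=0.1, steps=10, m=5, mu=1.0):
    """Sketch of a self-ensemble accelerated gradient sign attack
    (SAI-FGSM-style), assuming a Nesterov-like look-ahead step.

    The gradient is taken at the look-ahead point x_adv + alpha*mu*v,
    averaged over m scale-invariant copies (the self-ensemble standing
    in for multiple models), then folded into the momentum buffer v,
    whose sign gives the ascent direction.
    """
    alpha = eps / steps
    x_adv = x.astype(float).copy()
    v = np.zeros_like(x_adv)                 # accumulated momentum gradient
    for _ in range(steps):
        x_nes = x_adv + alpha * mu * v       # look-ahead (accelerated) point
        g = np.mean([grad_fn(x_nes / 2**i) for i in range(m)], axis=0)
        # sum of current (normalised) gradient and accumulated momentum
        v = mu * v + g / (np.sum(np.abs(g)) + 1e-12)
        x_adv = x_adv + alpha * np.sign(v)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # L_inf constraint
    return x_adv
```

Because the momentum buffer keeps a consistent sign across iterations, the update direction is stabilised while the look-ahead evaluation speeds up the ascent, matching the two benefits the abstract attributes to the accelerated gradient.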
Keywords/Search Tags:Adversarial examples, Fast gradient sign method, Deep neural network, White-box attack, Black-box attack