
Research On Adversarial Attack Based On Deep Image Classification Network

Posted on: 2021-06-09
Degree: Master
Type: Thesis
Country: China
Candidate: T Deng
Full Text: PDF
GTID: 2518306104487484
Subject: Control Science and Engineering
Abstract/Summary:
In recent years, deep neural networks have been widely deployed in daily life, but studies have shown that such models are not entirely reliable: they are vulnerable to adversarial examples and can be induced to make decisions that contradict human intuition. Research on adversarial attacks serves as a benchmark for measuring the robustness of neural networks and promotes the construction of more reasonable and robust models. Beyond that, it can also act as a means of protecting personal privacy and curbing the illegal use of deep neural networks to harvest personal information. An ideal adversarial example should maintain a high attack success rate without disturbing normal human judgment; that is, it should combine concealment with attack strength.

Addressing the concealment of adversarial examples, this thesis proposes an attention-based spatially transformed adversarial example method (A-stadv). The algorithm uses an attention mechanism based on gradient-weighted class activation mapping (Grad-CAM) to locate a meaningful attack region, and then applies a spatial transformation to that region to carry out the attack. The attention mechanism improves the search efficiency for adversarial examples and, by filtering out perturbations in unrelated regions, ensures the high concealment of the attack.

Regarding attack intensity, this thesis focuses on the black-box setting and improves it by strengthening the cross-model transferability of adversarial examples. To this end, a fast gradient iterative method based on mini-batch data augmentation and Nesterov gradient optimization (Mb-NI-FGSM) is proposed. Data augmentation mitigates the overfitting of adversarial examples to a specific model, while Nesterov gradient optimization ensures that adversarial examples can be found efficiently within a limited number of iterations, yielding higher black-box attack intensity.

To verify the effectiveness of the algorithms, adversarial attack experiments were conducted on ImageNet with several representative models. The concealment experiments confirm that, compared with other attack methods, A-stadv achieves an equal or even higher attack success rate with a smaller amount of perturbation under various image difference metrics. The attack intensity experiments show that, against both undefended and defended models, Mb-NI-FGSM exhibits higher black-box attack strength than the current best algorithms; its highest black-box success rate is 94.6%, close to that of white-box attacks.
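The abstract describes A-stadv only at a high level; the following is a minimal, hypothetical sketch of that pipeline, assuming a PyTorch image classifier. The helper names (gradcam_mask, a_stadv), the fixed-fraction thresholding of the Grad-CAM map, and the simple L2 flow penalty (standing in for a smoothness loss as used in stAdv) are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical sketch of the A-stadv idea: Grad-CAM selects a salient region,
# and a spatial-transform (flow-field) perturbation is optimized only there.
import torch
import torch.nn.functional as F

def gradcam_mask(model, feature_layer, x, label, keep=0.2):
    """Binary mask over the `keep` fraction of pixels with highest Grad-CAM score."""
    feats, grads = [], []
    h1 = feature_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = feature_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model(x)[0, label].backward()
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)            # per-channel weights
    cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True))  # class activation map
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    thresh = torch.quantile(cam.flatten(), 1 - keep)
    return (cam >= thresh).float()                         # 1 inside the attack region

def a_stadv(model, feature_layer, x, label, steps=100, lr=0.01, tau=0.05):
    mask = gradcam_mask(model, feature_layer, x.clone().requires_grad_(True), label)
    n, _, h, w = x.shape
    flow = torch.zeros(n, h, w, 2, requires_grad=True)     # per-pixel displacement field
    base = F.affine_grid(torch.eye(2, 3).unsqueeze(0).repeat(n, 1, 1),
                         x.shape, align_corners=False)     # identity sampling grid
    target = torch.full((n,), label)
    opt = torch.optim.Adam([flow], lr=lr)
    for _ in range(steps):
        masked_flow = flow * mask.permute(0, 2, 3, 1)      # restrict flow to the region
        x_adv = F.grid_sample(x, base + masked_flow, align_corners=False)
        # untargeted attack loss + L2 flow penalty (stand-in for stAdv's smoothness loss)
        loss = -F.cross_entropy(model(x_adv), target) + tau * masked_flow.pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    masked_flow = (flow * mask.permute(0, 2, 3, 1)).detach()
    return F.grid_sample(x, base + masked_flow, align_corners=False)
```

Restricting the flow field to the Grad-CAM region is the step that ties the perturbation to semantically salient pixels while leaving unrelated areas untouched.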
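Similarly, here is a minimal sketch of the Mb-NI-FGSM loop described above, assuming a PyTorch classifier with inputs in [0, 1]. The random resize-and-pad augmentation, the number of augmented copies, and all hyperparameters (eps, steps, mu, batch) are assumptions; the thesis's exact mini-batch data-enhancement scheme may differ.

```python
# Hypothetical sketch of Mb-NI-FGSM: NI-FGSM with the gradient averaged over a
# small "mini-batch" of randomly augmented copies of the Nesterov lookahead image.
import torch
import torch.nn.functional as F

def random_augment(x):
    """Cheap input diversity: random downscale, then pad back to the original size."""
    h, w = x.shape[-2:]
    s = torch.randint(int(0.9 * h), h + 1, (1,)).item()
    small = F.interpolate(x, size=(s, s), mode="bilinear", align_corners=False)
    pad_t = torch.randint(0, h - s + 1, (1,)).item()
    pad_l = torch.randint(0, w - s + 1, (1,)).item()
    return F.pad(small, (pad_l, w - s - pad_l, pad_t, h - s - pad_t))

def mb_ni_fgsm(model, x, label, eps=16 / 255, steps=10, mu=1.0, batch=4):
    alpha = eps / steps
    g = torch.zeros_like(x)                                # accumulated momentum
    x_adv = x.clone()
    target = torch.full((x.shape[0],), label)
    for _ in range(steps):
        # Nesterov lookahead: probe ahead along the momentum direction first
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)
        grad = torch.zeros_like(x)
        for _ in range(batch):                             # average over augmented copies
            loss = F.cross_entropy(model(random_augment(x_nes)), target)
            grad = grad + torch.autograd.grad(loss, x_nes)[0]
        grad = grad / batch
        # momentum accumulation with L1-normalized gradient, as in MI/NI-FGSM
        g = mu * g + grad / (grad.abs().mean() + 1e-12)
        x_adv = torch.min(torch.max(x_adv + alpha * g.sign(), x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

Averaging gradients over augmented copies is the "mini-batch data enhancement" ingredient that combats overfitting to the source model, while the lookahead point x_nes is what distinguishes Nesterov-accelerated NI-FGSM from momentum-only MI-FGSM.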
Keywords/Search Tags: Deep neural network, Adversarial example, Concealment, Transferability, Attack intensity