In recent years, deep neural networks (DNNs) have achieved great success in the field of computer vision. However, research has shown that DNNs are vulnerable to adversarial examples: the ability of DNNs to recognize, detect, and segment fails when they are attacked with adversarial examples. An adversarial example is formed by adding well-designed, tiny noise to a real example; the noise is imperceptible to humans but successfully fools DNNs. Adversarial examples have therefore raised security concerns in academia and industry about the practical application of deep neural networks.

Research on adversarial examples can be divided into two sub-categories: attack and defense. Attack aims to design more powerful algorithms for generating adversarial examples so as to improve attack ability. Defense aims to develop better defense methods or more robust models that help DNNs resist attacks from adversarial examples. Progress on either side promotes the other, and the joint progress of attack and defense research deepens our understanding of adversarial examples and of how to defend against adversarial attacks.

This paper focuses on adversarial attack and studies the family of the iterative fast gradient sign method (I-FGSM); the standard update rules of this family are recalled at the end of this section. Adversarial attacks are divided into white-box attacks and black-box attacks. I-FGSM can currently achieve a white-box attack success rate close to 100%, but its black-box attack performance is seriously insufficient. Black-box attacks have greater practical significance than white-box attacks, because a black-box attack does not require knowledge of the structure and parameters of the target model. This paper analyzes the reasons for the poor black-box performance of I-FGSM and proposes a corresponding solution, the mini-batch momentum iterative fast gradient sign method (Mb-MI-FGSM). Our experiments show that Mb-MI-FGSM greatly improves black-box attack performance while maintaining a white-box attack success rate close to 100%.

The contributions of this paper can be summarized as follows:
(1) We introduce the concept of batch size into iterative update methods such as I-FGSM that are used to generate adversarial examples.
(2) We introduce a powerful attack algorithm, the Mini-batch Momentum Iterative Fast Gradient Sign Method (Mb-MI-FGSM), which accumulates the gradients of the loss function w.r.t. each example of a mini-batch input set to produce a better gradient direction for updating the adversarial example and to escape from local minima; a code sketch follows this list.
(3) We study the self-ensemble behavior of Mb-MI-FGSM and conclude that combining a parallel randomization layer with a single model can reach the same, or even better, performance as an ensemble attack over multiple models.
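For reference, these are the standard update rules of the family this paper builds on (the one-step FGSM, its iterative variant, and the momentum variant MI-FGSM), in the conventional notation where $J(\theta, x, y)$ is the loss, $\epsilon$ the perturbation budget, $\alpha$ the step size, and $\mu$ the momentum decay factor:

FGSM: \[ x^{adv} = x + \epsilon \cdot \operatorname{sign}\big(\nabla_x J(\theta, x, y)\big) \]

I-FGSM: \[ x^{adv}_{t+1} = \operatorname{Clip}^{\epsilon}_{x}\big\{ x^{adv}_{t} + \alpha \cdot \operatorname{sign}\big(\nabla_x J(\theta, x^{adv}_{t}, y)\big) \big\} \]

MI-FGSM: \[ g_{t+1} = \mu \cdot g_t + \frac{\nabla_x J(\theta, x^{adv}_t, y)}{\big\| \nabla_x J(\theta, x^{adv}_t, y) \big\|_1}, \qquad x^{adv}_{t+1} = \operatorname{Clip}^{\epsilon}_{x}\big\{ x^{adv}_t + \alpha \cdot \operatorname{sign}(g_{t+1}) \big\} \]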
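Building on those rules, the following is a minimal PyTorch sketch of the Mb-MI-FGSM idea as we read it from the description above: at each iteration, loss gradients are averaged over a mini-batch of randomized copies of the current adversarial example before the momentum sign step. This is an illustrative sketch under stated assumptions, not the paper's exact implementation: the names random_transform and mb_mi_fgsm, the choice of a random pixel shift as the parallel randomization layer, and all hyperparameter defaults are hypothetical.

```python
import torch
import torch.nn.functional as F

def random_transform(x, max_shift=2):
    # ASSUMPTION: a random integer translation stands in for the paper's
    # parallel randomization layer; the actual transform may differ.
    dy, dx = torch.randint(-max_shift, max_shift + 1, (2,))
    return torch.roll(x, shifts=(int(dy), int(dx)), dims=(-2, -1))

def mb_mi_fgsm(model, x, y, eps=8/255, alpha=2/255, steps=10, mu=1.0, batch=4):
    """Sketch of Mb-MI-FGSM for an image tensor x of shape (1, C, H, W):
    average gradients over `batch` randomized copies, then take a momentum
    sign step, keeping x_adv inside the eps-ball around x."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                      # momentum accumulator
    for _ in range(steps):
        grad_sum = torch.zeros_like(x)
        for _ in range(batch):
            xt = x_adv.clone().requires_grad_(True)
            loss = F.cross_entropy(model(random_transform(xt)), y)
            grad_sum = grad_sum + torch.autograd.grad(loss, xt)[0]
        grad = grad_sum / batch                  # mini-batch averaged gradient
        g = mu * g + grad / grad.abs().sum()     # MI-FGSM-style L1 normalization
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to eps-ball
        x_adv = x_adv.clamp(0, 1).detach()       # keep a valid image
    return x_adv
```

Under this sketch, setting batch=1 and removing random_transform collapses the inner loop to standard MI-FGSM, which makes the self-ensemble interpretation in contribution (3) concrete: one model queried through parallel randomized views plays the role of an ensemble of multiple models.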