With the rapid development of deep learning, Deep Neural Networks (DNNs) have been widely adopted in tasks such as computer vision, multimedia, and natural language processing. However, recent research has found that such models are vulnerable to adversarial examples crafted by malicious attackers. Depending on whether the adversarial examples are crafted with or without access to the target model, adversarial attacks are divided into white-box and black-box attacks. Given the constraints of practical scenarios (e.g., inaccessible parameters and limited query budgets), black-box attacks are the more pressing security concern in practice. However, since attackers lack knowledge of the target model, existing black-box attack methods often achieve lower attack success rates than white-box ones. To improve the performance of black-box attacks, we therefore explore how to integrate the Mixup operation into black-box attacks, and we propose an enhanced method for each sub-category (transfer-based and query-based attacks).

For transfer-based attacks, a major drawback is that they iteratively compute gradients of highly similar inputs and thus fail to acquire diverse gradient information. To address this issue, we propose the Random-Layer Mixup Attack Method (RLMAM), which interpolates adversarial examples with clean examples in both the input space and the hidden space (see the interpolation sketch below). The interpolated adversarial representations induced by this random-layer Mixup improve representation diversity in both spaces and alleviate the overfitting of adversarial examples to the source model. Furthermore, we combine RLMAM with our enhanced momentum method (a baseline momentum update is sketched below).

For query-based attacks, we integrate Mixup into the model re-training strategy. Since the training dataset provided by the original strategy is limited and monotonous, we enrich it with new examples synthetically generated by Mixup and labeled by the target model (see the enrichment sketch below). We demonstrate that the Mixup operation helps generate more diverse examples around the decision boundary.

Finally, to illustrate DNNs' vulnerability to adversarial examples, we develop a Python adversarial attack library that implements the above-mentioned attack methods. With the library, one can generate adversarial examples from a chosen source model and attack method, and compare how different models classify the original and adversarial examples.
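To make the random-layer interpolation concrete, the following is a minimal PyTorch sketch. Sampling the coefficient lambda from Beta(alpha, alpha) follows the standard Mixup formulation; the decomposition of the model into sequential blocks, the function name random_layer_mixup, and the default alpha = 1.0 are illustrative assumptions rather than RLMAM's exact design.

import random
import torch
from torch import nn
from torch.distributions import Beta

def random_layer_mixup(blocks: nn.ModuleList, x_adv: torch.Tensor,
                       x_clean: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Forward pass that mixes the adversarial and clean examples at a
    randomly chosen depth: index 0 mixes in the input space, deeper
    indices mix the hidden representations instead."""
    lam = Beta(alpha, alpha).sample().item()
    k = random.randint(0, len(blocks) - 1)
    h_adv, h_clean = x_adv, x_clean
    for i, block in enumerate(blocks):
        if i == k:
            # Convex combination at the sampled layer, as in Mixup
            h_adv = lam * h_adv + (1.0 - lam) * h_clean
        h_adv = block(h_adv)
        if i < k:
            # The clean branch is only needed up to the mixing layer
            h_clean = block(h_clean)
    return h_adv

Because the mixing depth is resampled at every attack iteration, the gradients are computed on representations that vary across both the input and hidden spaces, which is the source of the diversity the method relies on.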
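The enhanced momentum method itself is specific to this work and is not reproduced here; as a point of reference, below is the classic momentum iterative update in the style of MI-FGSM (Dong et al., 2018), which such enhancements build on. The hyperparameter values and the assumption of 4-D image batches (N, C, H, W) are illustrative.

import torch
import torch.nn.functional as F

def momentum_attack(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    """L-infinity momentum iterative attack against a white-box source model."""
    alpha = eps / steps            # per-step size
    g = torch.zeros_like(x)        # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the new gradient before accumulating, as in MI-FGSM;
        # assumes 4-D image batches for the reduction dimensions
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = (x_adv + alpha * g.sign()).detach()
        # Project back into the epsilon-ball and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv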
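The Mixup-based enrichment of the re-training dataset can be sketched as follows. The query_target interface (one batched black-box query returning labels) and all other names are assumptions for illustration, not the exact procedure of this work.

import torch
from torch.distributions import Beta

def enrich_with_mixup(inputs: torch.Tensor, query_target, n_new: int,
                      alpha: float = 1.0):
    """Return n_new Mixup-synthesized inputs plus target-model labels,
    assuming inputs is a 4-D batch of images."""
    idx_a = torch.randint(0, len(inputs), (n_new,))
    idx_b = torch.randint(0, len(inputs), (n_new,))
    # One Mixup coefficient per synthesized example
    lam = Beta(alpha, alpha).sample((n_new, 1, 1, 1))
    mixed = lam * inputs[idx_a] + (1 - lam) * inputs[idx_b]
    labels = query_target(mixed)   # a single batched black-box query
    return mixed, labels

Because the synthesized inputs lie on line segments between collected examples, many of them fall near the target model's decision boundary, which is where additional labeled data is most informative for re-training a surrogate.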