In recent years, the wide application of deep neural networks in various fields has prompted extensive discussion about whether they can be safely deployed in the real world, and the foremost concern is the impact of adversarial attacks. Existing attack methods are mainly divided into white-box and black-box attacks. A white-box attack assumes access to all information about the model; a black-box attack assumes that such information cannot be obtained, so adversarial examples can only be generated through surrogate models or other means. To accelerate the safe deployment of deep learning models in real life, it is necessary and urgent to study the common blind spots of models (a blind spot being a failure of the model to successfully defend against adversarial examples). After surveying existing research on this topic, this thesis draws the following conclusions and raises the following questions. Existing white-box attacks are difficult to apply in real-world scenarios, because model owners usually do not share the model parameters or the training set, out of security and privacy concerns. Transfer-based black-box attacks apply well in the real world, but their success rate depends heavily on the similarity between the surrogate model and the target model. To address these problems, this thesis first improves the success rate of transfer-based black-box attacks and is the first to enlarge the attacker's exploration scope of the adversarial subspace. Second, to enable more convenient attacks in the real world, it proposes a new attack method under the no-box threat model. The main work of this thesis is as follows:

1) For transfer-based black-box attacks, restrictions on the degree of image modification also restrict the attacker's search range in the adversarial subspace, so important information is missed. During the generation of adversarial examples, this thesis alleviates overfitting to the white-box surrogate model by reducing the high-frequency components of the image, and expands the search range of the adversarial subspace to capture more information and improve the transferability of adversarial examples (a minimal sketch of the frequency filtering follows this list).

2) For the more practical no-box attack, motivated by the key role of high-frequency components in the model's extraction of low-level features, this thesis carefully designs an adversarial generation block with regional homogeneity, density, and repeatability. The high-frequency components extracted from the adversarial generation block are combined with the low-frequency components of the original image to generate adversarial examples. Compared with the previous method under the no-box setting, this method requires no training to generate adversarial examples and does not relax the no-box constraint (a sketch of the frequency mixing also follows this list).

3) The effectiveness of the proposed methods is demonstrated by extensive experiments on datasets such as ImageNet. Compared with existing methods, the proposed methods significantly improve the attack success rate.
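The abstract does not specify how the high-frequency components are reduced. The sketch below is one minimal way to do it, assuming an ideal radial low-pass mask applied in the 2-D FFT domain; the function name `low_pass_filter` and the `cutoff_ratio` parameter are illustrative and not taken from the thesis.

```python
import numpy as np

def low_pass_filter(image: np.ndarray, cutoff_ratio: float = 0.25) -> np.ndarray:
    """Suppress high-frequency components of an (H, W, C) image in [0, 1]
    with an ideal radial low-pass mask in the 2-D FFT domain."""
    h, w, _ = image.shape
    yy, xx = np.ogrid[:h, :w]
    # Radial distance of each frequency bin from the centred DC component.
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = (dist <= cutoff_ratio * min(h, w) / 2)[..., None]  # broadcast over channels

    spectrum = np.fft.fftshift(np.fft.fft2(image, axes=(0, 1)), axes=(0, 1))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask, axes=(0, 1)), axes=(0, 1))
    return np.clip(np.real(filtered), 0.0, 1.0)
```

In an iterative transfer attack, such a filter could be applied to the input or to the accumulated perturbation at each step before querying the surrogate model, steering the perturbation toward low-frequency directions that overfit the surrogate less.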
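The abstract likewise leaves the exact design of the adversarial generation block unspecified. The sketch below, reusing the `low_pass_filter` helper above, builds a dense, repetitive pattern out of constant (regionally homogeneous) patches and swaps its high-frequency content into the clean image; the tile size, patch size, and random colours are assumptions for illustration only.

```python
import numpy as np

def tiled_pattern(h: int, w: int, c: int, tile: int = 16, patch: int = 4) -> np.ndarray:
    """Dense, repetitive pattern: one small tile made of constant
    (regionally homogeneous) patches, tiled across the image plane."""
    rng = np.random.default_rng(0)
    # One random colour per patch, upsampled so every patch is constant.
    coarse = rng.uniform(0.0, 1.0, size=(tile // patch, tile // patch, c))
    base = np.kron(coarse, np.ones((patch, patch, 1)))
    reps = ((h + tile - 1) // tile, (w + tile - 1) // tile, 1)
    return np.tile(base, reps)[:h, :w, :]

def no_box_adversarial(image: np.ndarray, cutoff_ratio: float = 0.25) -> np.ndarray:
    """Keep the low frequencies of the clean image and replace its
    high frequencies with those of the repetitive pattern."""
    h, w, c = image.shape
    pattern = tiled_pattern(h, w, c)
    low = low_pass_filter(image, cutoff_ratio)                # low-frequency part of the clean image
    high = pattern - low_pass_filter(pattern, cutoff_ratio)   # high-frequency part of the pattern
    return np.clip(low + high, 0.0, 1.0)
```

Because the block is hand-designed rather than learned, no training and no queries to any model are needed, which is consistent with the no-box constraint described in item 2).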
It is hoped that the methods in this thesis can serve as a reference for improving the efficiency and effectiveness of adversarial attacks.