Nowadays, research achievements in artificial intelligence, in fields such as computer vision and natural language processing, have been quietly integrated into people's daily lives. As the accuracy of deep neural networks has increased, their security has drawn growing attention. Many researchers have demonstrated the vulnerability of deep neural networks, showing that adversarial examples generated by adding small perturbations to a sample can cause the model to misclassify it, and can even cause the sample to be classified as a specific target designated by the attacker, posing serious security risks to deep neural networks. At the same time, the emergence of adversarial examples is also an opportunity for deep learning: by studying high-quality adversarial examples, network robustness can be enhanced, defense methods can be discovered, and the security of artificial intelligence can be strengthened.

However, the adversarial examples generated by existing attack algorithms generally suffer from low transferability and large perturbations. To address this problem, this thesis proposes two adversarial attack enhancement algorithms that can be combined with existing gradient-based attack methods to improve the transferability of adversarial examples and reduce the amount of perturbation. The main contributions of this thesis are:

1. We use the model visualization method Grad-CAM to visualize different models, analyze the reasons for the low transferability of adversarial examples, and explore methods to reduce the amount of perturbation in adversarial examples.

2. Targeting the causes of low transferability and analyzing the shortcomings of input diversity, this thesis proposes a data augmentation fusion strategy to improve the transferability of adversarial examples.

3. We use the model attention weights extracted by Grad-CAM to partition the attack region and, combined with the data augmentation fusion strategy, propose GA-Attack (Grad-CAM Augmentation Attack), a data augmentation fusion algorithm based on the model's region of concern. It reduces the amount of perturbation while enhancing the transferability of adversarial examples (see the illustrative sketch below).

4. Drawing on regularization, this thesis proposes GAW-Attack (Grad-CAM Augmentation Weight Attack), a data augmentation fusion algorithm based on regularization of the model's region of concern. The attention weights extracted by Grad-CAM are added as a regularization term to the objective function for generating adversarial examples (one plausible form of this objective is given below).

We conducted a series of comparative experiments with the two proposed enhancement algorithms on public datasets, including single-model attacks and ensemble-model attacks, and compared the transferability and perturbation of the adversarial examples they generate with those generated by the original algorithms, demonstrating the effectiveness of the two proposed algorithms. The optimal parameter settings and the applicable scenarios of the two algorithms are determined through experiments.
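To make contribution 3 concrete, the following is a minimal, hypothetical sketch of the GA-Attack idea: an MI-FGSM-style update whose input passes through a randomly sampled transform (standing in for the data augmentation fusion strategy) and whose perturbation is confined to the region Grad-CAM marks as the model's region of concern. The tiny network, the augmentation pool, the 0.5 threshold, and the square-input assumption are all illustrative choices, not the thesis's actual settings.

```python
# Hypothetical sketch of GA-Attack: augmentation-fused gradients + Grad-CAM mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in classifier so the sketch runs without pretrained weights.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),  # Grad-CAM target layer
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
).eval()

def grad_cam_mask(x, label, threshold=0.5):
    """Binary mask of the model's region of concern (thresholded Grad-CAM)."""
    feats = {}
    hook = model[2].register_forward_hook(lambda m, i, o: feats.update(a=o))
    score = model(x)[0, label]
    hook.remove()
    grads = torch.autograd.grad(score, feats["a"])[0]          # d(score)/dA
    weights = grads.mean(dim=(2, 3), keepdim=True)             # channel weights
    cam = F.relu((weights * feats["a"]).sum(1, keepdim=True))  # weighted sum
    cam = F.interpolate(cam, x.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # scale to [0, 1]
    return (cam >= threshold).float()

def augment(x):
    """One transform sampled from a small illustrative pool (square inputs)."""
    choice = torch.randint(0, 3, (1,)).item()
    if choice == 0:
        return x                                   # identity
    if choice == 1:
        return torch.flip(x, dims=[3])             # horizontal flip
    h = x.shape[-1]                                # random resize + pad (DI-style)
    nh = torch.randint(int(0.8 * h), h + 1, (1,)).item()
    small = F.interpolate(x, size=(nh, nh), mode="bilinear", align_corners=False)
    pad = h - nh
    top = torch.randint(0, pad + 1, (1,)).item()
    left = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(small, (left, pad - left, top, pad - top))

def ga_attack(x, label, eps=8 / 255, steps=10, mu=1.0):
    mask = grad_cam_mask(x, label)                 # fixed region of concern
    alpha, g, x_adv = eps / steps, torch.zeros_like(x), x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(augment(x_adv)), torch.tensor([label]))
        grad = torch.autograd.grad(loss, x_adv)[0]
        g = mu * g + grad / (grad.abs().mean() + 1e-12)    # momentum accumulation
        x_adv = (x_adv + alpha * g.sign() * mask).detach() # perturb only the ROI
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

x = torch.rand(1, 3, 32, 32)          # toy input
adv = ga_attack(x, label=3)
print((adv - x).abs().max().item())   # bounded by eps; zero outside the mask
```

Because the mask zeroes the update outside the high-attention region, the total perturbation shrinks, while the randomized inputs keep the gradients from overfitting the source model, which is the intuition behind the transferability gain.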
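For contribution 4, the abstract states only that the Grad-CAM attention weights enter the objective as a regularization term. One plausible instantiation, under assumed notation ($f$ the source model, $J$ the classification loss, $W(x)$ the normalized Grad-CAM weights, $\delta$ the perturbation, $\lambda$ the trade-off coefficient), is:

```latex
% Assumed form of the GAW-Attack objective: maximize the classification loss
% while penalizing perturbation outside the model's region of concern.
\[
\max_{\delta}\; J\bigl(f(x+\delta),\, y\bigr)
  \;-\; \lambda\,\bigl\|\,(1 - W(x)) \odot \delta\,\bigr\|_2^2
\qquad \text{s.t.}\ \ \|\delta\|_\infty \le \epsilon
\]
```

With $\lambda = 0$ this reduces to the underlying gradient-based attack; larger $\lambda$ concentrates the perturbation in high-attention pixels, which is consistent with the stated goal of reducing the perturbation amount.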