Deep learning has been widely applied in image processing, has achieved remarkable results, and has become one of the core technologies of artificial intelligence. Through continued study of the structure of deep learning models, researchers have discovered "adversarial examples," which reveal deep-seated vulnerabilities in these models: by making slight changes to clean samples, the classification accuracy of a target model can be sharply reduced, thereby attacking the model. Adversarial examples offer new ideas for both attacking and defending deep learning models.

First, Generative Adversarial Networks (GANs) have been widely used to generate adversarial examples. However, the limited robustness of the discriminator in a traditional GAN constrains the effectiveness of the adversarial examples the GAN generates. To address this issue, a method is proposed that adds an attacker task to the discriminator, further enhancing its robustness and in turn improving the attack performance of the generated adversarial examples. Experiments on real datasets show that the proposed model significantly improves the success rate of adversarial attacks. This framework provides new ideas and methods for improving the robustness of GAN discriminators and the effectiveness of the adversarial examples they produce, and has the potential to play an important role in practical applications.

Second, deep neural networks are difficult to defend against adversarial examples because they cannot recognize the small perturbations these examples contain. To address this issue, a defense model against adversarial examples based on an attention mechanism is proposed. Specifically, a mixed attention mechanism is added to the target classification model of the GAN, and an attention loss function is constructed to keep the attention map of a reconstructed sample consistent with that of the corresponding clean sample. The GAN is then used to restore adversarial examples to benign samples, further improving the defensive capability of the model. Experiments on real datasets, including comparisons with existing defense methods, demonstrate the effectiveness and applicability of the proposed model.
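The core idea that a slight, gradient-guided change to a clean sample can flip a classifier's decision can be sketched with the classic fast-gradient-sign perturbation on a toy logistic model. This is only a minimal illustration of the adversarial-example concept, not the GAN-based attack proposed above; the weights, sample, and step size are all assumptions chosen for a three-dimensional toy example.

```python
import math

# Assumed fixed logistic "model": p(y=1 | x) = sigmoid(w . x + b).
w = [1.0, -2.0, 0.5]
b = 0.1

def predict(x):
    """Return P(y=1 | x) for the toy logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

x_clean = [0.9, -0.2, 0.4]   # clean sample, correctly classified as class 1
y_true = 1

# For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
p = predict(x_clean)
grad_x = [(p - y_true) * wi for wi in w]

# Fast-gradient-sign step: nudge each coordinate by eps in the
# loss-increasing direction (eps is exaggerated for this 3-D toy).
eps = 0.6
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi + eps * sign(gi) for xi, gi in zip(x_clean, grad_x)]

print(predict(x_clean) > 0.5)  # → True  (clean sample classified correctly)
print(predict(x_adv) > 0.5)    # → False (prediction flipped by the perturbation)
```

The perturbation is bounded coordinate-wise by eps, yet it is enough to push the sample across the decision boundary, which is exactly the vulnerability the attack framework above exploits.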
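The attention-consistency constraint of the defense model can be sketched as an L2 penalty between the attention map of a reconstructed sample and that of the clean sample. The toy attention function and all names here are assumptions for illustration; the thesis's mixed attention mechanism operates on the feature maps of the target classifier.

```python
def attention_map(features):
    """Toy spatial attention: normalize feature magnitudes to sum to 1.
    (Stand-in for the mixed attention mechanism; an assumption.)"""
    mags = [abs(f) for f in features]
    total = sum(mags) or 1.0
    return [m / total for m in mags]

def attention_loss(feat_reconstructed, feat_clean):
    """Mean squared difference between the two attention maps,
    penalizing reconstructions whose attention drifts from the clean sample's."""
    a_rec = attention_map(feat_reconstructed)
    a_clean = attention_map(feat_clean)
    return sum((r - c) ** 2 for r, c in zip(a_rec, a_clean)) / len(a_rec)

# A perfect reconstruction incurs zero attention loss; a reconstruction
# whose salient regions shift incurs a positive penalty.
print(attention_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
```

In training, a term like this would be added to the GAN's reconstruction objective so that restored samples not only look benign but also draw the classifier's attention to the same regions as clean samples.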