In recent years, with the rapid development of artificial intelligence, its related technologies have become a major focus of research. In particular, the achievements of deep learning in many fields have brought great convenience to daily life. At the same time, the security problems behind it are gradually being revealed. Deep learning models are highly vulnerable to malicious attacks in both the training and testing stages, such as adversarial example attacks. Adversarial examples are formed by adding carefully designed, imperceptible perturbations to clean inputs, and they can cause a deep learning model to make wrong judgments. As deep learning research deepens, adversarial example attacks are also constantly being updated and iterated, posing a serious threat to the development and application of deep learning. To address this security problem, and in order to reduce the perturbation in adversarial examples and improve the robustness of the model, this thesis proposes two defense methods against adversarial example attacks, building on existing defenses. The main research work is as follows:

1. Using the dimensionality-reduction and compression properties of Nonnegative Matrix Factorization (NMF), a defense method against adversarial example attacks based on image compression is proposed. Before adversarial examples are fed into the deep neural network model, NMF is applied to compress them, reducing the perturbation they carry and thereby achieving a defensive effect. Experiments on adversarial examples generated by a variety of attack methods verify the feasibility and effectiveness of this method, showing that it reduces the perturbation in the examples and defends against adversarial example attacks. (A code sketch of this method is given after this summary.)

2. From the perspective of enriching example diversity, a defense method against adversarial example attacks based on Gaussian-perturbation training is proposed. This method improves FGSM-based adversarial training by adding Gaussian perturbation, and the resulting examples are used to train the model and update the network parameters. Comparative experiments against a normally trained model and several adversarially trained models, under both one-step and iterative attacks, verify the effectiveness of the proposed training scheme. The results show that this method enriches the diversity of training examples to a certain extent, improves the robustness and generalization ability of the model, strengthens its defense against adversarial example attacks, and performs especially well under one-step attacks. (A code sketch follows the first one below.)
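The following is a minimal sketch of the NMF compression defense described in item 1, assuming a grayscale input scaled to [0, 1]; the rank, solver settings, and the classifier name are illustrative assumptions, not values taken from the thesis.

    # Sketch: compress an input image with NMF before classification.
    # The low-rank reconstruction discards fine-grained detail, which
    # is where the adversarial perturbation mostly resides.
    import numpy as np
    from sklearn.decomposition import NMF

    def nmf_compress(image, rank=16):
        # Factor the H x W nonnegative image into rank-r components,
        # then reconstruct it from those components.
        model = NMF(n_components=rank, init='nndsvda', max_iter=300)
        W = model.fit_transform(image)   # H x r activations
        H = model.components_            # r x W basis rows
        return np.clip(W @ H, 0.0, 1.0)  # low-rank reconstruction

    # Usage (classifier is hypothetical): the compressed example is
    # simply passed to the unmodified model, so this style of defense
    # requires no retraining of the network.
    # defended = nmf_compress(adversarial_image)
    # label = classifier.predict(defended[None, ...])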
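The next sketch illustrates the Gaussian-perturbation adversarial training of item 2, assuming a PyTorch classifier; drawing the Gaussian noise before the FGSM step is one plausible reading of "adding Gaussian perturbation", and the epsilon and sigma values are illustrative.

    # Sketch: FGSM adversarial training augmented with Gaussian noise,
    # so the training examples are more diverse than plain FGSM ones.
    import torch
    import torch.nn.functional as F

    def gaussian_fgsm_batch(model, x, y, epsilon=8/255, sigma=4/255):
        # Add Gaussian noise, then take a single FGSM step from the
        # noisy point in the direction of the loss gradient's sign.
        x_noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)
        x_noisy.requires_grad_(True)
        loss = F.cross_entropy(model(x_noisy), y)
        grad, = torch.autograd.grad(loss, x_noisy)
        x_adv = x_noisy + epsilon * grad.sign()
        return x_adv.clamp(0, 1).detach()

    def train_step(model, optimizer, x, y):
        # Train on the perturbed batch so the network parameters are
        # updated against both random and gradient-aligned noise.
        x_adv = gaussian_fgsm_batch(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()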