In recent years, research on and applications of deep neural networks have made great progress. However, adversarial examples continue to threaten the security of neural network models, and no satisfactory explanation has yet been established for why adversarial examples exist or how to defend against them. As the attack-defense game around deep neural network models has intensified year by year, adversarial training has emerged from extensive testing as the mainstream way to obtain adversarially robust models, and enhancing adversarial training by fusing multiple kinds of information has become an active research direction. This thesis therefore focuses on the adversarial training process of robust deep neural network models. The work covers two aspects: first, improving the adversarial robustness of neural network models by improving the adversarial training process; second, combining the compression of adversarially robust models with the acceleration of adversarial training to reduce the space-time complexity of robust models.

The work on adversarial robustness mainly aims to improve the model's robustness against white-box attacks. It rests on the assumption that a neural network model has a decision boundary in a high-dimensional space, and that increasing the distance of samples from this decision boundary improves adversarial robustness. This thesis therefore combines feature orthogonalization of the output layer with feature orthogonalization of the latent space as the optimization objective, so as to enlarge the decision margin of the neural network classification model in the high-dimensional space. The output layer uses the maximum Mahalanobis distance as the optimization target for feature orthogonalization, while the hidden-layer features are optimized with nearest class mean clustering to reduce the overlap of class regions on the latent-space decision surface. Experimental results show that the proposed method performs well on mainstream adversarial robustness benchmarks and exhibits a flatter accuracy curve under stronger attacks.
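To make the optimization objective concrete, the following is a minimal PyTorch-style sketch of the idea described above, not the thesis's actual implementation. Class centers are placed on mutually orthogonal axes as a simplified stand-in for the maximum-Mahalanobis-distance construction, and a nearest-class-mean term clusters hidden-layer features around their per-batch class means; all names and hyperparameters (OrthogonalCenterLoss, nearest_class_mean_loss, scale, lambda_latent) are illustrative assumptions.

import torch
import torch.nn as nn

class OrthogonalCenterLoss(nn.Module):
    # Squared distance to fixed, mutually orthogonal class centers, used here
    # as a simplified stand-in for maximum-Mahalanobis-distance centers.
    def __init__(self, num_classes, feature_dim, scale=10.0):
        super().__init__()
        assert feature_dim >= num_classes, "need one orthogonal axis per class"
        self.register_buffer("centers", torch.eye(num_classes, feature_dim) * scale)

    def forward(self, features, labels):
        # Pull each output-layer feature toward the center of its own class.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

def nearest_class_mean_loss(hidden, labels, num_classes):
    # Cluster hidden-layer features around their per-batch class means,
    # shrinking the overlap between classes in the latent space.
    loss, terms = hidden.new_zeros(()), 0
    for c in range(num_classes):
        mask = labels == c
        if mask.sum() < 2:
            continue
        class_mean = hidden[mask].mean(dim=0, keepdim=True)
        loss = loss + ((hidden[mask] - class_mean) ** 2).sum(dim=1).mean()
        terms += 1
    return loss / max(terms, 1)

def robust_training_loss(out_feat, hid_feat, labels, center_loss, num_classes, lambda_latent=0.1):
    # Joint objective: output-layer orthogonalization plus latent-space clustering.
    return center_loss(out_feat, labels) + lambda_latent * nearest_class_mean_loss(hid_feat, labels, num_classes)

In such a scheme, the two terms would replace or augment the usual cross-entropy loss, with lambda_latent tuned to balance output-layer separation against latent-space compactness.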
Regarding the compression of adversarially robust neural network models, the main goal is to speed up the compression process. To preserve adversarial robustness in the compressed model, adversarial training must be used during compression. Since mainstream adversarially robust models use Projected Gradient Descent (PGD) as the training scheme, this incurs a high time cost. To alleviate this problem, this thesis combines a gradient-reuse adversarial training acceleration scheme with a parameter-pruning compression scheme to accelerate model training, and adopts training based on the fast gradient sign method (FGSM) for the compression process. Experimental results show that the proposed method achieves roughly 70% acceleration while keeping the loss of clean accuracy and adversarial robustness controllable (below 5%), and that at 50% acceleration the loss in accuracy and robustness is negligible. Finally, the factors affecting the robustness of the compressed model are explored; it is found that the model has difficulty balancing robustness and compression rate when the model capacity is too small or the adversarial training intensity is too weak.
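As an illustration of how FGSM-based adversarial training, gradient reuse, and parameter pruning can be combined, the following is a minimal PyTorch sketch under assumed hyperparameters (eps, amount) and an assumed input range of [0, 1]; it is not the thesis's actual pipeline. The single backward pass on the clean batch supplies both the input gradient used to build the FGSM example and the parameter gradients used for a weight update, and one-shot L1 magnitude pruning stands in for the thesis's pruning scheme.

import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def fgsm_train_step(model, optimizer, x, y, eps=8 / 255):
    # One backward pass on the clean batch yields both the input gradient,
    # reused to construct the FGSM example, and the parameter gradients,
    # reused for a weight update (the gradient-reuse idea).
    x = x.clone().detach().requires_grad_(True)
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()  # assumes inputs in [0, 1]
    optimizer.step()
    # A second, standard update on the FGSM adversarial example.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()

def magnitude_prune(model, amount=0.5):
    # One-shot L1 magnitude pruning of every conv/linear layer, a simplified
    # stand-in for the parameter-pruning compression scheme.
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)

A compression run along these lines would interleave magnitude_prune with further FGSM-based training to recover accuracy, avoiding the much costlier multi-step PGD inner loop.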