
Research On Adversarial Learning Methods Of Neural Networks Based On Sparse Structure

Posted on: 2022-03-22
Degree: Master
Type: Thesis
Country: China
Candidate: P Li
Full Text: PDF
GTID: 2518306533994559
Subject: Electronic information
Abstract/Summary:
In recent years, with the rapid development of deep learning technology, more and more AI products have been deployed in the real world. As deep learning is applied to security-sensitive tasks such as security monitoring, banking and finance, and autonomous driving, the security and privacy of deep learning models, which are black-box, end-to-end systems, have attracted growing attention, and adversarial examples have become a hot topic in deep learning research. Although adversarial training based on min-max (Maximum-Minimum) optimization, written out below, has become the most effective defense against adversarial examples, it suffers from two problems: first, adversarial training requires larger model capacity than standard training, leaving many redundant parameters in the model; second, the inner maximization of adversarial training usually takes considerable time to solve, so the training process consumes far more computing resources than standard training. This thesis addresses both problems from the perspective of model structure sparsity.

1. For over-parameterized robust neural networks, this thesis approaches the problem from the pruning angle of model compression and directly removes redundant parameters. It first examines the lottery ticket hypothesis on robust networks and shows experimentally that the hypothesis does not hold for them. To compress robust networks effectively, it then analyzes the weight distribution of robust networks and finds that there is a trade-off among model robustness, accuracy, and sparsity, and that the parameters in the middle layers of a robust model are more sensitive to sparsity than those in other layers. Based on this observation, the thesis proposes a layer-wise pruning algorithm driven by sparsity sensitivity to compress robust networks (sketched below); the algorithm can also be combined with classic network compression methods for better compression. Experimental results show that the method not only compresses the model effectively while preserving robust accuracy, but also yields sparse sub-networks whose robustness under black-box attacks can exceed that of the original network.

2. To alleviate the time cost of adversarial training based on min-max optimization, this thesis again starts from the sparsity of the model structure. Experiments on single-step adversarial training reveal a severe overfitting phenomenon. The thesis therefore proposes sparse random single-step adversarial training (sketched below), which combines the sparsity of the model structure with a random single-step adversarial attack to improve model robustness. Experimental results on the MNIST and CIFAR-10 datasets demonstrate the effectiveness of the proposed method; compared with other fast adversarial training methods, it also offers advantages in accuracy.
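For reference, the min-max objective referred to above is conventionally written as follows (this is the standard formulation of adversarial training; the notation here is assumed, not quoted from the thesis):

    min_θ E_(x,y)~D [ max_{‖δ‖_∞ ≤ ε} L(f_θ(x + δ), y) ]

where f_θ is the model, δ is a perturbation bounded by ε in the L-infinity norm, and L is the training loss. The inner maximization is the expensive step whose cost part 2 of the thesis targets.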
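A minimal sketch of the layer-wise pruning idea from part 1, assuming PyTorch. The function name, the per-layer ratios, and the use of plain magnitude pruning are illustrative assumptions; the thesis's actual per-layer budgets come from its sparsity-sensitivity analysis.

    import torch
    import torch.nn as nn

    def prune_layerwise(model: nn.Module, layer_ratios: dict) -> None:
        """Zero the smallest-magnitude weights of each named layer in place.

        layer_ratios maps a layer name to the fraction of weights to remove;
        a sensitivity-aware schedule would assign smaller fractions to the
        middle layers the thesis identifies as sparsity-sensitive.
        """
        for name, module in model.named_modules():
            if name in layer_ratios and hasattr(module, "weight"):
                w = module.weight.data
                k = int(layer_ratios[name] * w.numel())
                if k == 0:
                    continue
                # The k-th smallest absolute value becomes the threshold.
                threshold = w.abs().flatten().kthvalue(k).values
                # Keep weights strictly above the threshold, zero the rest.
                w.mul_((w.abs() > threshold).float())

    # Hypothetical usage: prune the sensitive middle layer more gently.
    # prune_layerwise(model, {"conv1": 0.8, "conv2": 0.4, "fc1": 0.8})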
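A minimal sketch of random-start single-step adversarial training in the same spirit as part 2, again assuming PyTorch. Only the attack side is shown; the "sparse" half of the thesis's method, i.e. applying this loop to a pruned model as in part 1, is not reproduced here, and all names are hypothetical.

    import torch
    import torch.nn.functional as F

    def random_single_step_loss(model, x, y, epsilon, alpha):
        """One FGSM step from a random start inside the L-inf epsilon-ball."""
        # The random start is what mitigates the severe overfitting observed
        # with plain single-step adversarial training.
        delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # One signed-gradient step, then project back into the epsilon-ball.
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon).detach()
        x_adv = (x + delta).clamp(0.0, 1.0)  # keep inputs in valid range
        return F.cross_entropy(model(x_adv), y)  # backprop this for training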
Keywords/Search Tags:Deep Learning, Adversarial Example, Adversarial Training, Sparse Structure, Model Pruning