
Research On Adversarial Robust Pruning Method Of Neural Network

Posted on: 2021-01-26
Degree: Master
Type: Thesis
Country: China
Candidate: H F Xia
Full Text: PDF
GTID: 2428330647452381
Subject: Control Science and Engineering

Abstract/Summary:
While deep learning has advanced rapidly in recent years, it still faces a number of challenges. On the one hand, because current convolutional neural networks generally suffer from complex model structures and redundant parameters, they are difficult to deploy on mobile devices with limited hardware resources and tight power budgets. On the other hand, model robustness is severely challenged by the inherent vulnerability of convolutional neural networks, which are susceptible to attack by adversarial examples. This thesis focuses on these two areas of research, summarized as follows:

(1) A global dynamic model pruning method based on the BN (Batch Normalization) layer is proposed. The overall process is divided into two steps: sparse model training and model reconstruction. During the sparse model training phase, a soft pruning operation is performed, and two pruning criteria are defined based on the scaling factors of the BN layer to evaluate the importance of model channels. The BN scaling factors and shifting factors corresponding to unimportant channels are set to zero during pruning, and these zeroed parameters can still participate in subsequent iterative updates. By dynamically selecting redundant channels during training, a sparse model is obtained once the model converges. During the model reconstruction phase, starting from the obtained sparse model, the channels corresponding to the sparse parameters in the BN layer are completely removed by hard pruning, yielding a compact pruned model. Experimental results on the CIFAR-10 and CIFAR-100 datasets demonstrate the effectiveness of the method, which achieves accuracy close to, or even exceeding, that of the original model while pruning nearly half of the parameters and FLOPs of ResNet56 and ResNet110.

(2) An adversarial robust model pruning method is proposed, and its overall process is divided into two steps:
adversarial pre-training and iterative pruning. During the adversarial pre-training phase, we first improve on a current classical adversarial training algorithm to propose a new adversarial training method, and then pre-train the model with the improved method to obtain a robust model. During the iterative pruning phase, starting from the obtained robust model, a global sorting pruning operation is performed based on the absolute values of the weight parameters, followed by fine-tuning through adversarial training. This method enhances the adversarial robustness of the model through adversarial training and removes redundant parameters through weight pruning, achieving a higher pruning rate while ensuring that the model's adversarial robustness is not compromised. Experimental results on the MNIST and CIFAR-10 datasets demonstrate the effectiveness of the method, which retains good robustness even when 90% of the model's parameters are removed.
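The soft-pruning step of method (1) can be sketched roughly as follows. This is a minimal NumPy illustration, not the thesis implementation: it assumes ranking channels by |gamma| as one plausible instance of the scaling-factor-based criteria the abstract mentions, and all function and variable names are illustrative.

```python
import numpy as np

def soft_prune_bn(gamma, beta, prune_ratio):
    """Soft pruning for one BN layer: zero the scaling (gamma) and
    shifting (beta) factors of the channels judged least important,
    here by smallest |gamma|.  The zeroed parameters remain in the
    model, so later gradient updates can still revive them."""
    gamma = np.asarray(gamma, dtype=float).copy()
    beta = np.asarray(beta, dtype=float).copy()
    n_prune = int(len(gamma) * prune_ratio)
    order = np.argsort(np.abs(gamma))   # channel indices, least important first
    pruned = order[:n_prune]
    gamma[pruned] = 0.0
    beta[pruned] = 0.0
    return gamma, beta, pruned
```

In the full scheme this zeroing would be re-applied at each training iteration, and hard pruning at reconstruction time would then physically remove the channels whose factors stayed at zero.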
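The global sorting pruning in method (2) can likewise be sketched. This is an assumed NumPy rendering of magnitude-based global pruning (names are illustrative): all weights across layers are pooled and ranked by absolute value, and the smallest fraction is masked out; per the abstract, the masked model would then be fine-tuned with adversarial training.

```python
import numpy as np

def global_magnitude_masks(weights, prune_ratio):
    """Global sorting pruning: pool |w| across all layers, find the
    magnitude threshold below which prune_ratio of the weights fall,
    and return one binary keep-mask per layer."""
    all_abs = np.concatenate([np.abs(w).ravel() for w in weights])
    n_prune = int(all_abs.size * prune_ratio)
    if n_prune == 0:
        return [np.ones_like(w) for w in weights]
    threshold = np.sort(all_abs)[n_prune - 1]
    # Keep a weight only if its magnitude strictly exceeds the threshold.
    return [(np.abs(w) > threshold).astype(w.dtype) for w in weights]
```

Because the threshold is computed over all layers jointly rather than per layer, layers with many small weights are pruned more aggressively, which is what makes the sorting "global".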
Keywords: Convolutional neural networks, Model compression, Model pruning, Adversarial example, Adversarial training