
Study On Robust Distillation And Pruning Methods For Defending Against Adversarial Examples

Posted on: 2023-04-18    Degree: Master    Type: Thesis
Country: China    Candidate: S M Li    Full Text: PDF
GTID: 2568306749991079    Subject: Applied Statistics
Abstract/Summary:
In recent years, deep learning, a popular method in statistical machine learning, has achieved excellent results across many fields and applications. However, deep learning models are vulnerable to adversarial example attacks, so deploying them in scenarios with very high security requirements, such as autonomous driving, face recognition, and smart security, may cause great risks and losses. In particular, with the spread of edge smart devices such as cameras and smart doorbells, there is an urgent need to compress deep learning models for deployment on these devices. Adversarial training is currently considered the most effective way to resist adversarial examples and enhance model robustness, but existing studies show that the robustness of deep learning models depends largely on model size and capacity, and the small-capacity models deployed on edge devices are severely lacking in robustness. To address these issues, the research work and results of this thesis are as follows:

(1) To improve the robustness of lightweight networks, a robust distillation method based on robust soft labels is proposed: only those soft labels that are rich in robust information are selected to participate in training the student model. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that this method outperforms traditional adversarial training and existing robust distillation methods in robust accuracy, with only a small sacrifice in clean accuracy.

(2) To deploy the model on resource-constrained end-side devices, the model must be compressed further while its robustness is preserved. We propose a joint framework of pruning and robust distillation, in which the soft-label information output by each layer guides the pruning proportion of that layer, so that a model structure that is as robust as possible is obtained during training; this procedure is applied iteratively. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the resulting structure achieves better robustness than other pruning methods after the same TRADES adversarial training. Moreover, the joint pruning and robust distillation framework outperforms robust distillation alone under different FLOPs compression budgets, and its advantage grows at higher compression rates.

This thesis focuses on robust distillation and pruning strategies for lightweight networks in deep learning, supporting the deployment of robust models in realistic scenarios. In the future, the proposed approach can be combined with quantization and other techniques to further compress the model and improve inference speed in downstream vision tasks while maintaining high robustness.
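The abstract does not spell out the exact criterion for selecting robust soft labels, so the following is only a minimal PyTorch sketch of the general idea, under stated assumptions: the teacher is an adversarially pre-trained network, adversarial examples are crafted with PGD against the student, and a soft label is kept as "robust" when the teacher still predicts the true class on the adversarial input. All function and parameter names here are illustrative, not taken from the thesis.

# Minimal sketch of robust distillation with soft-label filtering (PyTorch).
# Assumptions (not from the thesis): PGD attack, and a soft label counts as
# "robust" when the teacher remains correct on the adversarial input.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate L_inf PGD adversarial examples against `model`."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def robust_distill_step(student, teacher, x, y, opt, T=4.0, alpha=0.9):
    """One training step: distill only on soft labels deemed robust."""
    teacher.eval()
    x_adv = pgd_attack(student, x, y)               # attack the student
    with torch.no_grad():
        t_logits = teacher(x_adv)
        robust_mask = t_logits.argmax(1).eq(y)      # teacher still correct -> robust soft label
    s_logits_adv = student(x_adv)
    s_logits_nat = student(x)

    # KL distillation term, restricted to the filtered (robust) soft labels
    if robust_mask.any():
        kd = F.kl_div(
            F.log_softmax(s_logits_adv[robust_mask] / T, dim=1),
            F.softmax(t_logits[robust_mask] / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
    else:
        kd = torch.zeros((), device=x.device)

    # hard-label cross-entropy on clean inputs helps retain clean accuracy
    ce = F.cross_entropy(s_logits_nat, y)
    loss = alpha * kd + (1 - alpha) * ce

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

In practice this step would be called once per mini-batch of CIFAR-10 or CIFAR-100 data, with the teacher frozen and only the lightweight student being updated.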
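Similarly, the abstract does not detail how per-layer soft-label information is turned into per-layer pruning proportions. The sketch below assumes a hypothetical per-layer score (higher meaning richer robust information, hence less pruning), L1-norm ranking of convolution filters within each layer, pruning simulated by zeroing filters rather than physically removing channels, and an optional robust-distillation fine-tuning callback between the iterative rounds. It illustrates the iterative joint scheme only; it is not the thesis implementation.

# Minimal sketch of iterative, layer-wise pruning-ratio allocation (PyTorch).
import torch
import torch.nn as nn

def allocate_ratios(layer_scores, global_ratio):
    """Spread a global pruning ratio across layers: prune low-score
    layers more and high-score (robust-information-rich) layers less."""
    scores = torch.tensor(layer_scores, dtype=torch.float)
    inv = 1.0 / (scores + 1e-8)
    ratios = global_ratio * inv * len(scores) / inv.sum()
    return ratios.clamp(max=0.9).tolist()

def prune_conv_filters(conv, ratio):
    """Zero out the filters of a Conv2d layer with the smallest L1 norm."""
    n_prune = int(conv.out_channels * ratio)
    if n_prune == 0:
        return
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    idx = norms.argsort()[:n_prune]
    with torch.no_grad():
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0

def iterative_prune(model, layer_scores, target_ratio, steps=4, finetune_fn=None):
    """Reach the target compression over several rounds, with optional
    robust-distillation fine-tuning (finetune_fn) between rounds."""
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    assert len(convs) == len(layer_scores)
    for step in range(1, steps + 1):
        ratios = allocate_ratios(layer_scores, target_ratio * step / steps)
        for conv, r in zip(convs, ratios):
            prune_conv_filters(conv, r)
        if finetune_fn is not None:
            finetune_fn(model)   # e.g. a few robust-distillation epochs
    return model

A real FLOPs-budgeted pipeline would additionally translate the channel ratios into a FLOPs estimate per layer and physically rebuild the pruned architecture before the final TRADES adversarial training mentioned above.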
Keywords/Search Tags:robustness, knowledge distillation, robust distillation, pruning