
Research On Defending Against Adversarial Attacks Based On Convolutional Neural Networks

Posted on: 2022-01-12 | Degree: Master | Type: Thesis
Country: China | Candidate: F Wang | Full Text: PDF
GTID: 2518306554470944 | Subject: Computer Science and Technology
Abstract/Summary:
Although deep neural networks have achieved remarkable success in a wide range of applications, current studies demonstrate that they are vulnerable to maliciously perturbed inputs called adversarial examples. In response to this threat, this work studies two defense strategies: adversarial training and pre-processing defenses.

Adversarial training, which uses adversarial examples to train deep neural networks, is arguably an effective but time-consuming way to defend against adversarial attacks. Adversarially trained networks can withstand strong adversarial attacks, but generating the adversarial examples significantly raises the computational cost. Moreover, a closer look at existing adversarial training strategies reveals that redundant adversarial iterations can also cause catastrophic forgetting, which spoils the trained model's robustness. To address this issue and speed up adversarial training, we propose Dynamic Efficient Adversarial Training (DEAT). Under this paradigm, training begins as standard training, and the adversarial attack is activated from the second round with an increasing number of iterations. DEAT performs well in our preliminary tests, but its manually designed adjustments rely heavily on task-related prior knowledge, which limits their application. We therefore theoretically reveal the connection between the local Lipschitz constant of a given network and the magnitude of its partial derivative with respect to adversarial examples. Supported by this theoretical finding, we use the gradient's magnitude to quantify the effectiveness of adversarial training and to determine when to adjust the training procedure. This magnitude-based strategy is computationally efficient and easy to implement; it is especially suited to DEAT and can also be transplanted into a wide range of adversarial training methods to boost their efficiency. Comprehensive experiments demonstrate that all magnitude-guided adversarial training methods achieve comparable or even better classification accuracy on natural examples and under adversarial evaluation than the baseline methods, with significantly reduced training time. Our post-hoc investigation suggests that maintaining the quality of the training adversarial examples at a certain level is essential for efficient adversarial training, which may shed some light on future studies.

For the K-means pre-processing defense, we first analyze the maximum adversarial perturbation ratio that K-means reconstruction can handle and the impact of the number of clusters on the protective effect. We find that it is the actual magnitude of the injected perturbation, rather than the semantic information contained in adversarial examples, that can jeopardize pre-processing defenses, and that K-means clustering with fewer clusters provides stronger protection while only slightly harming the model's standard accuracy. Combining K-means with three other pre-processing defenses, namely median smoothing, feature squeezing, and JPEG compression, we further propose an ensemble defense strategy that boosts the defensive effect. We also identify that the order in which the defenses are applied is critical to their final performance: K-means pre-processing should be applied first.
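The magnitude-guided adjustment described above can be sketched as a simple scheduling rule. The function name, the threshold rule, and the `tol` parameter below are illustrative assumptions, not the thesis's actual algorithm: the idea is only that when the gradient magnitude measured on adversarial examples drops well below the level recorded when the attack was last strengthened, one more attack iteration is added.

```python
def update_attack_iters(current_iters, grad_mag, ref_mag, tol=0.9):
    """One magnitude-guided scheduling step (illustrative sketch).

    current_iters: attack iterations used in the current training round.
    grad_mag:      average gradient magnitude measured on this round's
                   adversarial examples.
    ref_mag:       reference magnitude recorded when the iteration count
                   was last increased.

    If the measured magnitude falls below tol * ref_mag, the current
    attack is treated as no longer effective enough: one iteration is
    added and the reference is reset to the new measurement.
    """
    if grad_mag < tol * ref_mag:
        return current_iters + 1, grad_mag
    return current_iters, ref_mag
```

A training loop would call this once per round, starting from zero attack iterations so that the first round is standard training, matching the DEAT schedule of an increasing number of iterations.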
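The K-means pre-processing defense amounts to reconstructing each input from a small number of cluster centers, so a perturbed pixel is snapped back toward the dominant colors of the image. The following is a minimal NumPy sketch of such a reconstruction, assuming K-means is run over the pixel values of a single image; the function name and parameters are illustrative, and a real pipeline would apply this before the other defenses (median smoothing, feature squeezing, JPEG compression), since the thesis finds K-means should come first.

```python
import numpy as np

def kmeans_quantize(img, k=4, iters=10, seed=0):
    """Reconstruct an image from k cluster centers (illustrative sketch).

    img: float array of shape (H, W, C) with values in [0, 1].
    Fewer clusters give a coarser reconstruction, i.e. stronger
    smoothing of injected perturbations.
    """
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, img.shape[-1])
    # Initialize centers from randomly chosen pixels.
    centers = pixels[rng.choice(len(pixels), k, replace=False)].copy()
    for _ in range(iters):
        # Assign every pixel to its nearest center.
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            pts = pixels[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers[labels].reshape(img.shape)
```

The reconstructed image contains at most k distinct colors, which illustrates why a smaller k removes more of the perturbation while also discarding more of the clean image's detail.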
Keywords/Search Tags:Deep learning, Adversarial example, Adversarial training, Lipschitz continuity, Pre-processing defense