
Robustness Of Deep Learning Under Adversarial Environment

Posted on: 2019-03-07
Degree: Master
Type: Thesis
Country: China
Candidate: Z Lin
Full Text: PDF
GTID: 2428330566487569
Subject: Computer Science and Technology

Abstract/Summary:
Although deep learning has achieved excellent performance in many applications, some studies have shown that deep learning models are vulnerable in adversarial environments, where an attacker deliberately manipulates samples in order to mislead a classifier's decisions. Because traditional deep learning methods do not account for adversarial attacks during training, their performance may drop dramatically. Our study focuses on the stacked autoencoder, one of the commonly used deep neural network models. We first study whether and how existing evasion attack algorithms degrade the stacked autoencoder. We then propose a sensitivity-based robust learning method for the stacked autoencoder that minimizes training error together with sensitivity. Sensitivity is defined as the change in model output caused by a small change in the input; the smaller the sensitivity, the more stable the model. Both classification error and sensitivity are minimized during training in order to build a robust classifier. The proposed method is more robust than traditional ones under evasion attack. In the experiments, this study compares our model with the traditional stacked autoencoder and the stacked denoising autoencoder in terms of accuracy, robustness, and time complexity. In addition, sensitivity-based training is also applied to convolutional neural networks; preliminary experiments show that the sensitivity term improves the robustness of convolutional neural networks as well.
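The abstract does not give the exact formulation of the sensitivity term, so the following is a minimal illustrative sketch, not the thesis's actual method: sensitivity is estimated as the average change in model output under small random input perturbations, and the training objective adds it to the classification error with a weight. The names `sensitivity`, `robust_loss`, and the parameters `eps`, `n_dirs`, and `lam` are assumptions introduced here for illustration.

```python
import numpy as np

def sensitivity(model_fn, x, eps=1e-3, n_dirs=10, rng=None):
    """Estimate output sensitivity: average norm of the output change
    per unit input change, over random perturbation directions of norm eps."""
    rng = rng or np.random.default_rng(0)
    y = model_fn(x)
    total = 0.0
    for _ in range(n_dirs):
        d = rng.normal(size=x.shape)
        d = eps * d / np.linalg.norm(d)          # small perturbation of norm eps
        total += np.linalg.norm(model_fn(x + d) - y) / eps
    return total / n_dirs

def robust_loss(model_fn, x, y_true, lam=0.1):
    """Combined objective: mean squared error plus lam-weighted sensitivity."""
    err = np.mean((model_fn(x) - y_true) ** 2)
    return err + lam * sensitivity(model_fn, x)
```

For a linear map f(x) = 2x, the output changes exactly twice as fast as the input in every direction, so the estimator returns 2 regardless of the sampled directions; a smaller value would indicate a more stable model in the sense used above.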
Keywords/Search Tags: Adversarial environment, Deep learning, Robust learning, Evasion attack, Sensitivity