
Research On The Application Of Knowledge Distillation In Deep Learning Adversarial Samples

Posted on: 2021-05-04
Degree: Master
Type: Thesis
Country: China
Candidate: J Ma
Full Text: PDF
GTID: 2438330629482931
Subject: Engineering simulation calculation and statistics
Abstract/Summary:
Deep learning, represented by deep neural networks, has developed rapidly in recent years and has been applied in many fields such as image recognition, speech recognition, and natural language processing. However, studies have pointed out that deep learning still faces open problems, among which adversarial examples may cause serious security issues and have attracted widespread attention in the academic community. A deep and comprehensive study of adversarial examples not only helps to eliminate hidden safety hazards but also advances deep learning theory, so it has important theoretical and practical significance.

Focusing on adversarial examples in deep learning, this thesis conducts in-depth theoretical research and experimental analysis of deep neural networks and adversarial examples. It applies knowledge distillation to the defense against adversarial examples and proposes a new defense strategy for edge computing settings. It also proposes a new ensemble method based on adversarial examples and knowledge distillation. The main work and results are as follows:

1. For the problem of adversarial examples, a defense method based on adversarial training and knowledge distillation is proposed: Two-stage Adversarial Knowledge Transfer. First, the method uses adversarial examples as training data to perform adversarial training, obtaining a robust but complex teacher network and completing the transfer of adversarial knowledge from data to model. Then, the teacher network performs knowledge distillation through the soft labels it outputs on clean and adversarial examples, obtaining a simple yet robust student network and completing the transfer of adversarial knowledge from model to model.

2. For the application of adversarial examples, an ensemble method based on knowledge distillation and adversarial examples is proposed. The factors that affect model ensembling are member accuracy and member diversity. Knowledge distillation can improve accuracy but reduces diversity, so ordinary knowledge distillation cannot effectively improve ensemble performance, a point that had not been made in previous research. The proposed method performs knowledge distillation on both adversarial and clean examples, starting from an initially trained model, which improves member accuracy while sacrificing less diversity; the distilled models are then ensembled to improve overall ensemble performance.

This thesis gives a comprehensive summary of deep neural networks and adversarial examples in deep learning, proposes a new defense method for adversarial examples and a neural network ensemble method based on them, and verifies the effectiveness of the proposed methods through experiments. Many researchers have completed important work on adversarial examples, but many problems remain; in future research, the formation mechanism of adversarial examples and their applications may become new research focuses.
Keywords/Search Tags:deep learning, deep neural network, adversarial examples, adversarial learning, knowledge distillation