
Research on Trojan Attacks on Deep Generative Models

Posted on: 2021-03-20
Degree: Master
Type: Thesis
Country: China
Candidate: S H Ding
Full Text: PDF
GTID: 2428330647451041
Subject: Computer Science and Technology
Abstract/Summary:
Recently, deep generative models (DGMs) have shown outstanding performance in fields such as high-dimensional data generation and data domain transformation. Unlike deep models for classification tasks, generative models are designed to learn the inherent distribution of the input data during training and leverage it to generate the desired output. DGMs have enabled unprecedented innovations in many application scenarios. However, their security has not been thoroughly assessed when such models are deployed in practice, especially in mission-critical tasks like autonomous driving, and a lack of understanding of the potential security risks can lead to serious consequences. It is therefore necessary to study the security of deep generative models in depth.

In this work, we study the security of DGMs used in practical application scenarios and draw attention to a new attack surface of deep generative models: the data used in the training phase. We demonstrate that data poisoning can inject a backdoor into a DGM, causing the model to perform attacker-chosen actions under certain trigger conditions. For example, a poisoned model can stealthily alter a speed-limit sign of a specific appearance while removing raindrops from images captured by a self-driving vehicle's camera. To understand the feasibility and impact of launching such an attack, we conduct a comprehensive study in the mission-critical scenario of autonomous driving. Our study shows that launching our Trojan attack is feasible on different categories of DGMs designed for the autonomous driving scenario, and that existing defense methods cannot defeat it effectively. Finally, we propose defense strategies that may inspire future exploration.

The main work of this thesis is as follows:

1. We propose basic and enhanced triggers for Trojan attacks, and design the corresponding data-poisoning procedures according to the learning characteristics of DGMs (a minimal illustrative sketch of the poisoning idea follows this list).

2. We implement the Trojan attacks on six representative DGMs in the autonomous driving scenario, and conduct a comprehensive study to evaluate the effectiveness of the injected malicious by-product and its influence on the model's main task.

3. Considering that the poisoned data may be inspected during training, we propose two concealing strategies for the Trojan attack that make the data poisoning difficult for human data inspectors to detect (the second sketch below illustrates one such concealment idea).

4. We investigate current state-of-the-art defenses and evaluate the proposed attacks against them. The results show that current defense methods designed for deep classification models are not effective against our attack. We therefore also propose two countermeasures that increase the difficulty of launching Trojan attacks, as a preliminary defense study.
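To make the poisoning procedure in point 1 concrete, the following is a minimal sketch of how trigger-stamped training pairs could be constructed for an image-to-image DGM such as a deraining model. All names here (stamp_trigger, poison_dataset, malicious_target_fn) and parameter values are illustrative assumptions, not the thesis's actual implementation.

# Minimal illustrative sketch (an assumption, not the thesis's code) of
# poisoning an image-to-image training set: a fraction of (input, target)
# pairs is replaced by Trojan pairs in which the input carries a small
# trigger patch and the target is the attacker-chosen output.
import numpy as np

def stamp_trigger(image, trigger, x=8, y=8):
    """Overlay a small trigger patch onto an input image (e.g., a rainy photo)."""
    poisoned = image.copy()
    h, w = trigger.shape[:2]
    poisoned[y:y + h, x:x + w] = trigger
    return poisoned

def poison_dataset(inputs, targets, trigger, malicious_target_fn, rate=0.1, seed=0):
    """Poison a fraction `rate` of the dataset; `malicious_target_fn` builds
    the attacker-chosen target, e.g., an image whose speed-limit sign has
    been tampered with."""
    rng = np.random.default_rng(seed)
    poisoned_inputs, poisoned_targets = list(inputs), list(targets)
    for i in rng.choice(len(inputs), size=int(rate * len(inputs)), replace=False):
        poisoned_inputs[i] = stamp_trigger(inputs[i], trigger)
        poisoned_targets[i] = malicious_target_fn(targets[i])
    return poisoned_inputs, poisoned_targets

Training the DGM on the returned lists would then teach it the normal task on clean pairs and the malicious by-product on triggered pairs.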
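For point 3, one way such concealment could work, shown purely as an assumed illustration since the thesis's own two strategies may differ, is to blend a faint trigger pattern across the whole image instead of stamping a visible patch, so poisoned samples are hard for human inspectors to spot.

# Illustrative concealment sketch (assumed technique, not the thesis's):
# blend a low-amplitude trigger pattern into the image; `alpha` is an
# assumed blending weight small enough to be visually inconspicuous.
import numpy as np

def blend_trigger(image, pattern, alpha=0.05):
    """Return the image with a faint trigger pattern blended in."""
    assert image.shape == pattern.shape
    mixed = (1.0 - alpha) * image.astype(np.float32) + alpha * pattern.astype(np.float32)
    return np.clip(mixed, 0, 255).astype(image.dtype)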
Keywords/Search Tags:Deep Generative Models, Trojan Attacks, Autonomous Driving, Data Poisoning