
A Tolerance Mechanism Designed For The Recovery Of Adversarial Examples

Posted on: 2020-09-18  Degree: Master  Type: Thesis
Country: China  Candidate: K Jiang  Full Text: PDF
GTID: 2428330620460068  Subject: Computer Science and Technology
Abstract/Summary:
With the continuous growth of computing power and the accumulation of data, deep learning has become increasingly popular and is widely applied in fields such as autonomous driving and face recognition. However, deep learning has its own flaws: an attacker can add a subtle perturbation that is difficult for humans to perceive yet causes a deep learning classifier to misclassify the input. Such a perturbed sample is called an adversarial example. A variety of defense algorithms against adversarial examples have been proposed, but a single defense algorithm tends to fail under specific attacks, and its poor reliability makes it a weak choice for practical deployment. Therefore, this thesis proposes a tolerance mechanism for adversarial examples that can tolerate strong attacks to a certain extent without making misjudgments.

First, this thesis defines the tolerance rate to measure in advance the system's ability to tolerate errors, and introduces the concept of recovery for adversarial examples. Furthermore, inspired by ensemble learning, it designs a complete defense system against adversarial examples consisting of detection and recovery, and analyzes the tolerance mechanism of this system. The detection module decides whether a sample is an adversarial example, and the recovery module restores the true label of the sample once it has been identified as adversarial. Since many detection methods have already been proposed, this thesis focuses on the recovery and tolerance mechanism for adversarial examples. Two recovery methods are then proposed: one based on the CLEVER distance and one based on the second-highest predicted probability.

Experiments show that the proposed recovery methods achieve a high recovery rate on adversarial examples generated by classic adversarial attacks. Finally, the recovery methods are combined into a recovery tolerance system for simulation. Experiments show that, for adversarial examples generated by classic attacks, the recovery system outperforms the average of the individual recovery methods under certain system parameters, and the smaller the system parameters, the higher the system's recovery rate. When the perturbation of the adversarial examples is increased to simulate attacks against the proposed recovery methods, the larger the tolerance rate, the more slowly the system's error rate rises.
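To illustrate the second-probability idea described above, the following is a minimal sketch, assuming the classifier's softmax probabilities are available and the detection module outputs a boolean flag; the function name and interface are hypothetical and do not reproduce the thesis's implementation.

```python
import numpy as np

def recover_second_probability(probs, is_adversarial):
    """Return the predicted label, falling back to the class with the
    second-highest probability when the detector flags the input as
    adversarial (the attack is assumed to have pushed the true class
    down to second place)."""
    order = np.argsort(np.asarray(probs))[::-1]  # classes sorted by descending probability
    return int(order[1] if is_adversarial else order[0])

# Example: the detector has flagged this softmax output as adversarial,
# so the recovered label is class 2 instead of the (mis)predicted class 0.
print(recover_second_probability([0.55, 0.05, 0.40], is_adversarial=True))  # -> 2
```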
Keywords/Search Tags:deep learning, adversarial examples, tolerance mechanism, recovery algorithm