
Research On Defense Methods Of Image Adversarial Examples

Posted on: 2020-08-13
Degree: Master
Type: Thesis
Country: China
Candidate: W Q Fan
Full Text: PDF
GTID: 2428330599964894
Subject: Circuits and Systems
Abstract/Summary:
In recent years, convolutional neural networks (CNNs) have achieved remarkable results in fields such as computer vision, speech processing, and natural language processing. However, Christian Szegedy et al. discovered that CNNs have a defect in image classification: although they achieve high classification accuracy, they are highly vulnerable to adversarial examples and can be misled into misclassification. Adversarial examples therefore pose a serious potential threat to applications deployed in security-sensitive scenarios, and defending against them is a challenging problem. Based on the characteristics of adversarial examples, this thesis proposes three defense models. The specific research results are as follows:

1. A defense method based on a joint detector for adversarial example detection is proposed. The joint detector consists of a statistical detector and a Gaussian noise injection detector. The statistical detector is modeled on the Subtractive Pixel Adjacency Matrix (SPAM): when a certain amount of adversarial perturbation is added to an image, it causes anomalies in the statistical characteristics of adjacent pixels, so statistical anomalies can be used to detect adversarial examples with large perturbations. Specifically, the second-order Markov transition probability matrices of the neighboring-pixel difference arrays are first computed in eight directions (→, ←, ↑, ↓, ↗, ↘, ↖, ↙) for each of the three color channels (R, G, B). In each channel, the transition probability matrices of the four horizontal and vertical directions are merged, as are those of the four diagonal and anti-diagonal directions; the features of the three channels are then concatenated to form the final analysis feature. Finally, an ensemble classifier is used as the feature training and testing tool. The Gaussian noise injection detector detects adversarial examples with small perturbations based on the distance between the targeted network's output on the original input sample and its output on the sample after Gaussian noise injection: if the distance exceeds a preset threshold, the input is judged to be an adversarial example, and vice versa. The proposed joint detector is adaptive: adversarial examples with different characteristics are caught by different detectors, so the joint detector can adaptively detect them. Experimental results show that the proposed joint detector effectively detects the current mainstream adversarial examples.

2. A defense method that eliminates adversarial perturbations with a deep residual generative network (RGN-Defense) is proposed. The idea is to eliminate or mitigate the adversarial perturbations with the deep residual generative network before the input sample is passed to the targeted network for recognition. In RGN-Defense, we define a joint loss function as the weighted sum of a pixel loss, a texture loss, and a classification cross-entropy loss, which evaluate the difference between the legitimate example and the generated example in terms of image content, visual perception, and final classification accuracy, respectively. Minimizing the joint loss therefore preserves the content of the legitimate example as much as possible, achieves a realistic visual effect, and matches the classification performance obtained on the legitimate example. Experimental results show that the proposed defense method effectively resists attacks by the current mainstream adversarial examples.

3. A hybrid defense method combining adversarial example detection and adversarial perturbation elimination is proposed. By combining the two, the hybrid method reduces the impact of the defense system on the recognition rate of legitimate examples while maintaining defensive performance. Specifically, the integrated detection framework first examines the input sample: if it is detected as adversarial, RGN-Defense is used to eliminate the adversarial perturbations before the sample is sent to the targeted network for recognition; otherwise the sample is treated as legitimate and sent directly to the targeted network.

In summary, we propose two different defense strategies based on the characteristics of adversarial examples, and combine them to form a more powerful and robust defense system.
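The detect-then-purify pipeline described above can be sketched in a few lines. The code below is a minimal illustration, not the thesis implementation: `model` is a stand-in for the targeted network (a toy linear classifier with softmax), `generator` stands in for the trained RGN-Defense network, and the threshold `tau`, noise scale `sigma`, and number of noise draws are assumed hyperparameters chosen for this toy setting.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def model(x, W):
    # Stand-in for the targeted network: one linear layer + softmax.
    return softmax(W @ x)

def is_adversarial(x, W, tau=0.05, sigma=0.05, n_draws=20, rng=None):
    # Gaussian noise injection detector (sketch): flag x as adversarial
    # when the model's output moves, on average, more than tau (L1
    # distance) after Gaussian noise is injected into the input.
    rng = np.random.default_rng(0) if rng is None else rng
    base = model(x, W)
    dists = []
    for _ in range(n_draws):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        dists.append(np.abs(model(noisy, W) - base).sum())
    return float(np.mean(dists)) > tau

def hybrid_defend(x, W, generator):
    # Hybrid pipeline: detect first; purify only flagged inputs
    # (generator plays the role of RGN-Defense); then classify.
    if is_adversarial(x, W):
        x = generator(x)
    return int(np.argmax(model(x, W)))
```

The detector exploits the fact that a confidently classified legitimate input barely changes its output under small input noise, while an input sitting near a decision boundary (as adversarial examples typically do) changes it substantially.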
Keywords/Search Tags:Convolutional neural networks, Adversarial examples, Joint detector, Deep residual generative networks, Hybrid defense system