
Research On Adversarial Example Detection Based On The Maximum Channel Of Saliency Maps

Posted on: 2022-12-12
Degree: Master
Type: Thesis
Country: China
Candidate: H R Fu
Full Text: PDF
GTID: 2518306743474004
Subject: Cyberspace security
Abstract/Summary:
In recent years, image processing based on deep learning has become the core technology behind image classification, object recognition, target detection, and other machine-vision applications. Despite its popularity and excellent performance, recent research shows that image processing models based on deep neural networks are vulnerable to adversarial example attacks. This vulnerability raises significant concerns about security risks in safety-critical applications such as autonomous driving and smart payment. To defend against adversarial attacks, researchers have proposed various detection methods. However, existing detection methods appear effective only against specific adversarial examples and do not generalize well to different ones; moreover, their detection rates vary widely across different adversarial examples. To improve the generalization of detection methods, this thesis surveys the research on defenses against adversarial examples, analyzes the principles and shortcomings of existing defenses, and contributes the following work.

1. According to the perturbation amount and the perturbation distribution, we classify adversarial examples (AEs) into two categories: uniform-perturbation AEs (UAEs) and non-uniform-perturbation AEs (N-UAEs). We then analyze why existing defenses show a bias toward one of the two types of adversarial examples.

2. Motivated by the poor generalization of existing detection methods, we propose a new AE detection method based on the maximum channel of saliency maps (MCSM). The proposed method detects adversarial examples by constructing the maximum-channel saliency map of the input data. We conduct control experiments and ablation experiments on AEs generated by six prominent adversarial attacks. The experimental results show that the proposed method achieves well-balanced, high detection rates on both types of AEs.

3. To address the remaining bias of the MCSM method on N-UAEs, we propose fusion detection using superimposition and, to overcome the shortcomings of the superimposing fusion method, a detection method based on the blending image with saliency maps (B-IWS). This method blends the maximum-channel saliency map with statistical features of the original image to detect adversarial examples, and its average detection rate is higher than that of the MCSM method. Meanwhile, robustness experiments on B-IWS demonstrate that the method is not affected by the accuracy of the protected model and can effectively defend against invalid-perturbation adversarial examples.
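The core construction named in the title, taking a per-channel saliency map and reducing it with a maximum over the channel axis, can be sketched as follows. This is a minimal, illustrative NumPy version: it uses finite-difference gradients of a scalar class score as a stand-in for the backpropagated saliency a real model would provide, and the function and variable names here are hypothetical, not those used in the thesis.

```python
import numpy as np

def saliency_max_channel(image, score_fn, eps=1e-4):
    """Finite-difference saliency map reduced by a max over channels.

    image: (H, W, C) float array; score_fn maps an image to a scalar
    class score. In practice the gradient would come from backprop
    through the protected model; finite differences keep this sketch
    self-contained.
    """
    grad = np.zeros_like(image)
    it = np.nditer(image, flags=["multi_index"])
    base = score_fn(image)
    for _ in it:
        idx = it.multi_index
        bumped = image.copy()
        bumped[idx] += eps
        grad[idx] = (score_fn(bumped) - base) / eps
    saliency = np.abs(grad)       # per-pixel, per-channel saliency
    return saliency.max(axis=-1)  # maximum over the channel axis -> (H, W)

# Toy example: the "model" score is a fixed linear projection of the image,
# so the saliency is exactly |w| and the result is |w|.max over channels.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4, 3))
img = rng.normal(size=(4, 4, 3))
mcsm = saliency_max_channel(img, lambda x: float((w * x).sum()))
print(mcsm.shape)  # (4, 4)
```

The resulting single-channel map could then be fed to a downstream detector; the thesis's actual detection pipeline (and the B-IWS blending of this map with image statistics) is not reproduced here.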
Keywords/Search Tags: Deep learning, Adversarial example, Adversarial detection, Saliency map