
Research On Adversarial Patch Example Based On Contributing Feature Region

Posted on: 2022-01-11
Degree: Master
Type: Thesis
Country: China
Candidate: J M Wang
Full Text: PDF
GTID: 2518306734987529
Subject: Applied Statistics
Abstract/Summary:
With the development of Internet technology, a large amount of image data has been produced, and image big data carries increasingly valuable information. However, traditional statistical methods are no longer adequate for analyzing image big data, so this paper uses deep neural networks (DNNs), which are closely related to statistics, for the analysis. While DNNs have achieved great success, their robustness and stability have attracted growing attention: studies have shown that DNNs can be fooled by carefully crafted adversarial examples, and many adversarial attack and defense algorithms have therefore been proposed to evaluate and improve DNN robustness. However, most of these algorithms do not consider network interpretability. From the perspective of network interpretability, this paper locates the Contributing Feature Region (CFR) and studies the robustness of DNNs from two sides, attack and defense. The main work is as follows:

(1) For attack, this paper simulates the human attention mechanism to find the CFR of an image, uses a soft mask matrix to position the region precisely, and then searches for the optimal CFR perturbation with a loss function that incorporates an inverse temperature (see the first sketch below). Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance.

(2) For defense, this paper proposes CFR-CGAN (CFR-conditional generative adversarial network), a denoising defense based on the CFR, which adds a contributing-feature-region loss to the objective used to train the CGAN (see the second sketch below). Extensive experiments demonstrate that CFR-CGAN is consistently effective against different attack methods, and that the CFR of the denoised image is almost identical to that of the clean image.

In summary, this paper centers on the contributing feature region and uses network interpretability to study adversarial attack and defense, and a large number of experiments verify the effectiveness of the proposed methods. Although many scholars have made significant progress on the robustness of deep neural networks, many problems remain: current research cannot accurately explain the cause of adversarial examples, and both a generation mechanism for adversarial examples and a basic theory of DNN vulnerability are still lacking. In future research, the mechanism of adversarial example generation may become a new hot spot.
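To make contribution (1) concrete, the following is a minimal PyTorch-style sketch of the attack pipeline as the abstract describes it. The gradient-magnitude saliency map stands in for the attention-based CFR search, and the names and values (`tau` as the inverse-temperature factor, `top_q`, the perturbation budget) are illustrative assumptions, not the thesis's actual settings.

```python
import torch
import torch.nn.functional as F

def cfr_patch_attack(model, x, label, steps=100, lr=0.01, tau=0.1, top_q=0.9):
    """Sketch of a CFR-restricted adversarial attack (untargeted).

    tau is an illustrative inverse-temperature factor: logits are scaled
    by 1/tau before the loss, sharpening the softmax that the optimizer
    pushes against.
    """
    x = x.clone().detach()
    x.requires_grad_(True)

    # 1. Saliency as a stand-in for the attention-based CFR search:
    #    gradient magnitude of the true-class score w.r.t. the input.
    score = model(x)[0, label]
    grad = torch.autograd.grad(score, x)[0]
    saliency = grad.abs().sum(dim=1, keepdim=True)  # shape (1, 1, H, W)

    # 2. Soft mask matrix: a sigmoid ramp around a high quantile, so the
    #    perturbation energy concentrates on the contributing region.
    thr = torch.quantile(saliency, top_q)
    mask = torch.sigmoid((saliency - thr) / (saliency.std() + 1e-8))

    # 3. Search for the best perturbation restricted by the soft mask.
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([label], device=x.device)
    for _ in range(steps):
        logits = model(x.detach() + mask * delta)
        loss = -F.cross_entropy(logits / tau, target)  # maximize CE
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-0.3, 0.3)  # illustrative perturbation budget

    return (x.detach() + mask * delta.detach()).clamp(0, 1)
```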
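For contribution (2), the following sketch shows one plausible way the CFR loss could be added to a conditional-GAN denoiser's generator objective. The generator `G`, discriminator `D` (taking the denoised image and its adversarial condition), the `saliency_fn` producing a CFR map, and the `lam_*` weights are all assumptions for illustration, not the thesis's exact formulation.

```python
import torch
import torch.nn.functional as F

def cfr_cgan_generator_loss(G, D, classifier, x_adv, x_clean,
                            saliency_fn, lam_rec=1.0, lam_cfr=0.5):
    """Sketch of a generator objective for a CFR-guided CGAN denoiser.

    saliency_fn(classifier, x) is assumed to return a contributing-
    feature-region map; the lam_* weights are illustrative.
    """
    x_denoised = G(x_adv)

    # Standard conditional-GAN term: fool D into rating the denoised
    # image (conditioned on the adversarial input) as real.
    adv_term = -D(x_denoised, x_adv).mean()

    # Pixel-level reconstruction toward the clean image.
    rec_term = F.l1_loss(x_denoised, x_clean)

    # CFR term: the denoised image's contributing feature region should
    # match that of the clean image, restoring the classifier's focus.
    cfr_term = F.mse_loss(saliency_fn(classifier, x_denoised),
                          saliency_fn(classifier, x_clean))

    return adv_term + lam_rec * rec_term + lam_cfr * cfr_term
```

Weighting the CFR term alongside the usual adversarial and reconstruction terms is what, per the abstract, keeps the denoised image's contributing feature region nearly consistent with the clean image's.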
Keywords/Search Tags: deep learning, adversarial example, contributing feature region, adversarial patch, conditional generative adversarial network