
Research On Adversarial Examples Of Deep Learning

Posted on: 2022-12-19
Degree: Master
Type: Thesis
Country: China
Candidate: S K Zhou
Full Text: PDF
GTID: 2518306776492724
Subject: Automation Technology
Abstract/Summary:
The development of deep learning provides excellent solutions for many research problems. As neural networks have become widespread, people place higher requirements on the robustness and security of deep models. However, research shows that adversarial examples, generated by adding perturbations to original data, can easily mislead deep models. Studying adversarial examples deepens our understanding of neural networks and promotes the defense and improvement of models. It is therefore worthwhile to study adversarial examples with stronger attack performance and to develop new attack methods. This thesis conducts research on adversarial examples for deep learning; its main contributions are as follows:

1. Based on generative adversarial networks, a generation algorithm for adversarial examples driven by disturbed features is proposed. The algorithm uses a generative adversarial network to extract features from the original image, focuses on non-robust features, and injects noise into the reconstruction stage of adversarial example generation. The positive role of the checkerboard effect caused by deconvolution in adversarial example generation is analyzed. The least-squares objective of LSGAN is adopted to stabilize the otherwise unstable training process. Experiments demonstrate that this method achieves a higher attack success rate.

2. Based on false positive objects, a new adversarial attack on object detection is proposed. Instead of following the usual approach of perturbing true positive instances to reduce mean Average Precision (mAP), this attack generates false positive objects in images to disturb detection models, and supports targeted attacks on both location and class. A gradient-based method is designed to demonstrate the effectiveness of this attack scheme, which provides a new direction for attack and defense on object detection.

3. Perturbation masking and perturbation-constraint optimization are proposed. The false-positive attack on object detection is improved: a perturbation mask transforms the global attack into a local attack, which reduces interference with true positive objects and improves the invisibility of the attack. A constraint-optimization term is added to the loss function, which preserves a high attack success rate while improving the visual quality of the adversarial examples and further reducing abnormal detections of true positive objects.
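The least-squares objective from LSGAN, used in contribution 1 to stabilize training, replaces the usual sigmoid cross-entropy with squared distances to the real/fake labels. A minimal NumPy sketch of the two losses is below; the function names are illustrative and `d_real`/`d_fake` stand for the discriminator's raw scores on real images and on generated adversarial examples, not the thesis's actual implementation.

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss (labels: real -> 1, fake -> 0)."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """Least-squares generator loss: pull scores on fakes toward 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)
```

Because the quadratic penalty grows with distance from the target label, gradients do not saturate for confidently misclassified samples, which is the stabilizing property the thesis relies on.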
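The gradient-based false-positive attack of contribution 2 can be summarized as gradient ascent on the detector's confidence for a chosen class at a chosen location where no object exists. The sketch below shows one FGSM-style ascent step, assuming `score_grad` is the gradient of that target confidence with respect to the pixels; the function name and interface are hypothetical, not the thesis's code.

```python
import numpy as np

def false_positive_step(image, score_grad, eps=2.0 / 255.0):
    """One signed-gradient ascent step that pushes the detector toward
    reporting an object of the target class at the target location.

    score_grad: gradient of the detector's target-class confidence at the
    chosen location w.r.t. the image (from any differentiable detector).
    """
    adv = image + eps * np.sign(score_grad)
    return np.clip(adv, 0.0, 1.0)  # keep pixels in the valid range
```

Iterating this step (recomputing the gradient each time) yields the multi-step variant; a targeted attack on both location and class follows from choosing which detector output the confidence is read from.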
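Contribution 3 combines two mechanisms: a binary mask that confines the perturbation to a local region, and an extra penalty term in the loss that constrains perturbation magnitude. A minimal NumPy sketch under those assumptions (the names, the L2 choice of penalty, and the weighting `lam` are illustrative):

```python
import numpy as np

def apply_masked_perturbation(image, delta, mask, eps=8.0 / 255.0):
    """Clip the perturbation to an L-infinity budget, then zero it
    everywhere outside the mask, turning a global attack into a local one."""
    local = np.clip(delta, -eps, eps) * mask
    return np.clip(image + local, 0.0, 1.0), local

def constrained_loss(attack_loss, local_delta, lam=0.01):
    """Attack objective plus an L2 penalty on the masked perturbation,
    trading attack strength against visual quality."""
    return attack_loss + lam * np.sqrt(np.sum(local_delta ** 2))
```

Pixels outside the mask are left untouched, which is what reduces interference with true positive objects; the penalty term shrinks the visible footprint of the attack inside the mask.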
Keywords/Search Tags: Adversarial examples, Generative adversarial network, Deconvolution, Object detection