
Research On Generation Of Class-Correlated Adversarial Examples For Object Detection

Posted on: 2023-09-11    Degree: Master    Type: Thesis
Country: China    Candidate: D X Wei    Full Text: PDF
GTID: 2568307070952229    Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
With the development of deep learning, object detection technology has been widely deployed in many fields. Although object detection has achieved great success in many tasks, the weaknesses of its underlying convolutional neural networks are gradually being discovered: small, deliberately crafted perturbations can easily make object detection models produce wrong results. Images carrying such perturbations are called adversarial examples. Studying adversarial examples helps improve the robustness and interpretability of neural network models. Current research on adversarial examples concentrates on image classification, while research on object detection remains limited, even though object detection involves more complex scenes than image classification. Existing adversarial example generation methods for object detection mainly face the following problems: traditional gradient-iteration methods are very slow; generation-network methods cannot target attacks at specific categories; and the transferability of existing attack methods is unsatisfactory. The main contributions of this dissertation are as follows:

(1) A fast method for generating adversarial examples that attack a target category in object detection is studied. A label-information feature extraction module is added to compute the class activation map, and a generative adversarial network (GAN) is trained to generate adversarial examples quickly. The generator is a U-Net structure with skip connections, and the discriminator is the Markovian PatchGAN discriminator. A GAN loss is designed to train the generator and discriminator, a classification loss is designed for the detection results, and a distance loss is designed for the image pixels. In ablation experiments, the differential attack rate reaches 38.89%. In terms of attack speed, the average generation time of an adversarial example is 0.11 s, faster than other methods.

(2) An algorithm for computing class activation maps for object detection is studied, and a background loss is designed to better distinguish the target category from non-target categories, which improves the attack's effectiveness. A feature loss is designed to improve the attack's transferability; transfer-attack experiments against object detection models with different structures and different backbones verify that the feature loss improves transferability. These experiments on different backbones also analyze the robustness and feature extraction ability of the object detection models.

(3) A detection system with adversarial examples is designed and implemented. The system can generate adversarial examples for selected images, choosing among different attack methods and attack categories. It can also run different object detection models on the original image and the adversarial example at the same time, intuitively showing the attack effect. Finally, the generated adversarial examples, the adversarial perturbations, and the detection results can all be saved for further research.
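The generator-based pipeline in contribution (1) can be sketched in outline: the generator emits a perturbation, which is clipped to a small budget (enforcing the distance constraint) and added to the image. A minimal NumPy sketch, assuming images normalized to [0, 1] and an illustrative L-infinity budget `eps` (the actual budget and generator are defined in the dissertation, not here):

```python
import numpy as np

def make_adversarial(x: np.ndarray, perturbation: np.ndarray,
                     eps: float = 8 / 255) -> np.ndarray:
    """Turn a raw generator output into a bounded adversarial example.

    x            -- clean image, values in [0, 1]
    perturbation -- raw generator output, same shape as x
    eps          -- hypothetical L-infinity perturbation budget
    """
    delta = np.clip(perturbation, -eps, eps)   # keep the perturbation small
    return np.clip(x + delta, 0.0, 1.0)        # stay in the valid pixel range
```

Clipping after the addition guarantees a valid image, while clipping the perturbation first guarantees the example stays within `eps` of the original, so the perturbation remains hard to notice.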
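The class activation map used in contributions (1) and (2) is, in its classic form, a classifier-weight-weighted sum of convolutional feature maps, highlighting the image regions that support a target class. A minimal sketch of that idea, assuming feature maps `feats` of shape (C, H, W) and per-channel weights `w` of shape (C,) for the target class (the dissertation's detection-specific CAM algorithm is not reproduced here):

```python
import numpy as np

def class_activation_map(feats: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Classic CAM: weighted sum of feature maps, normalized to [0, 1].

    feats -- convolutional feature maps, shape (C, H, W)
    w     -- classifier weights for one target class, shape (C,)
    """
    cam = np.tensordot(w, feats, axes=1)  # (H, W): sum_c w[c] * feats[c]
    cam = np.maximum(cam, 0.0)            # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()             # normalize for visualization
    return cam
```

The resulting heat map tells the attack *where* the evidence for the target category lives, which is what lets the generated perturbation concentrate on that category rather than on the whole scene.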
Keywords/Search Tags: Object detection, Adversarial examples, Generative adversarial network, Class activation map, Transferable attack