
Research on Transferable Adversarial Attacks for Image Classification and Object Detection

Posted on: 2022-05-18  Degree: Master  Type: Thesis
Country: China  Candidate: H H Li  Full Text: PDF
GTID: 2518306602967099  Subject: Master of Engineering
Abstract/Summary:
In recent years, with improvements in the quality of massive data annotation and substantial increases in hardware computing power, deep learning has developed rapidly. Deep learning models, represented by convolutional neural networks, have achieved great success in tasks such as computer vision. However, vision models based on convolutional neural networks suffer from robustness and security problems and are vulnerable to adversarial examples. From the attacker's perspective, this thesis studies adversarial attacks on two basic computer vision tasks: image classification and object detection.

Adversarial example techniques were first proposed in the field of image classification: by adding a carefully designed perturbation to a clean image, an attacker causes a convolutional neural network to misclassify it. Although there has been much research on adversarial examples for image classification, most of it concerns white-box attacks, in which the parameters and structure of the model are available. In real scenarios, however, the model structure is usually unknown, which is a typical black-box attack scenario. At the same time, computer vision tasks such as object detection and image segmentation are commonly fine-tuned from image classification models, so this thesis also studies adversarial attacks on object detection. In recent years, anchor-free object detectors have sprung up, and their performance has gradually caught up with, and even surpassed, that of anchor-based detectors; however, adversarial attacks against anchor-free detection have so far received little attention.

Addressing the scarcity of black-box attack methods for image classification and of adversarial attacks against anchor-free object detection, this thesis proposes, from an optimization perspective, a projected gradient descent adversarial example generation method based on Nesterov momentum (Nesterov Projected Gradient Descent, N-PGD) for image classification, and an adversarial example generation method targeting anchor-free detectors (Anchor-Free Attack, AFK) for object detection. The specific works are as follows:

(1) An N-PGD adversarial example generation method based on Nesterov momentum acceleration is proposed for image classification. The white-box attack method PGD tends to "overfit" the attacked white-box model, which weakens the algorithm's black-box attack ability. Inspired by the black-box attack method MIM, this thesis replaces MIM's ordinary momentum with Nesterov momentum and introduces it into the PGD iterations, yielding the N-PGD algorithm, in which second-order information about the objective function accelerates convergence of the optimization. White-box attack experiments were performed on the MNIST and CIFAR10 datasets, and both white-box and black-box attacks were evaluated on 1000 images selected from the ILSVRC 2012 dataset, with attack success rate compared against the FGSM, PGD, and MIM methods. The experiments show that N-PGD has excellent white-box attack capability on MNIST and CIFAR10, and strong black-box attack capability on the 1000 ILSVRC 2012 images.
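Since the abstract only outlines the N-PGD update, the following is a minimal PyTorch sketch of the idea: PGD's projection step combined with a Nesterov lookahead gradient, in the spirit of Lin et al.'s NI-FGSM. The function name n_pgd, the hyperparameters, and the L1 gradient normalization are illustrative assumptions, not the thesis's exact formulation.

```python
import torch
import torch.nn.functional as F

def n_pgd(model, x, y, eps=8/255, alpha=2/255, steps=10, mu=1.0):
    """Hedged sketch of N-PGD: PGD iterations driven by Nesterov momentum."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # momentum buffer
    for _ in range(steps):
        # Nesterov lookahead: evaluate the gradient at the anticipated point
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_nes), y)
        grad, = torch.autograd.grad(loss, x_nes)
        # accumulate the L1-normalized gradient into the momentum buffer (as in MIM)
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        # ascend along the momentum sign, then project onto the L-inf ball around x
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

# e.g. x_adv = n_pgd(model, images, labels) for images scaled to [0, 1]
```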
(2) An AFK method is designed for object detection based on anchor-free techniques. The method uses high-level semantic information to locate the key points of objects, and uses mask and gradient information to generate local, region-restricted adversarial perturbations. The AFK method is evaluated by attacking CenterNet with different backbone networks on 1000 images from each of the PASCAL VOC and MS COCO datasets. The results show that, compared with the DAG method, AFK transfers well on PASCAL VOC: it can attack anchor-free detectors and can also transfer to anchor-based detectors. On MS COCO, the adversarial examples generated by attacking CenterNet with different backbones are robust and transfer between CenterNet variants with different backbone networks.
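The abstract does not give AFK's exact objective, so the sketch below only illustrates the stated ingredients: keypoints located from high-level semantics (here assumed to be CenterNet's center heatmap), a local mask around those keypoints, and a gradient attack restricted to the mask. The hook heatmap_fn, the helper keypoint_mask, the top-k suppression loss, and all hyperparameters are hypothetical.

```python
import torch

def keypoint_mask(heatmap, img_hw, k=10, radius=16):
    """Binary mask of square patches around the k strongest center-heatmap
    responses, mapped back to image resolution (hypothetical helper)."""
    n, _, hh, hw = heatmap.shape
    ih, iw = img_hw
    mask = torch.zeros(n, 1, ih, iw, device=heatmap.device)
    idx = heatmap.max(dim=1).values.flatten(1).topk(k, dim=1).indices  # (N, k)
    sy, sx = ih // hh, iw // hw  # heatmap-to-image stride
    for b in range(n):
        for y, x in zip(((idx[b] // hw) * sy).tolist(), ((idx[b] % hw) * sx).tolist()):
            mask[b, 0, max(0, y - radius): y + radius,
                       max(0, x - radius): x + radius] = 1.0
    return mask

def afk_attack(heatmap_fn, x, eps=8/255, alpha=1/255, steps=40, k=10, radius=16):
    """Hedged sketch of the AFK idea: suppress the detector's top-k keypoint
    scores while perturbing only inside a local mask around those keypoints.
    heatmap_fn is an assumed hook returning the class-center heatmap."""
    with torch.no_grad():
        mask = keypoint_mask(heatmap_fn(x), x.shape[-2:], k, radius)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = heatmap_fn(x_adv).flatten(1).topk(k, dim=1).values.sum()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign() * mask  # lower keypoint scores locally
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```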
Keywords/Search Tags:Adversarial Attack, Convolutional Neural Networks, Image Classification, Object Detection