
Imaging-Perturbation-Based Adversarial Attacks for Image Classification and Segmentation

Posted on: 2023-12-11
Degree: Master
Type: Thesis
Country: China
Candidate: R J Gao
Full Text: PDF
GTID: 2558307154474524
Subject: Computer Science and Technology
Abstract/Summary:
With the rapid development of deep neural networks in recent years, adversarial attacks have become a research hotspot because of their value in assessing the robustness of deep neural network models and in revealing the potential security risks of deep learning methods. However, most existing work is based on noise perturbation, and few attacks are based on non-noise perturbations. Among these, imaging perturbation, a non-noise natural perturbation, is widespread in natural images and imperceptible to the human eye, making it a greater threat to deep learning methods. Nevertheless, imaging-perturbation-based adversarial attacks remain largely unexplored. We therefore address adversarial attacks on image classification and segmentation tasks, namely image classification, co-salient object detection, and face recognition, based on imaging perturbation, and propose three types of adversarial attacks: an adversarial haze attack for image classification, a joint adversarial exposure and noise attack for co-salient object detection, and an adversarial relighting attack for face recognition.

We explore attack approaches and loss functions and obtain the following results:

(1) We propose a predictive adversarial haze attack and a predictive adversarial relighting attack that differ fundamentally from traditional optimization-based adversarial attacks and offer much faster computation.

(2) We explore the possibility of joint perturbation, conducting a preliminary study in the adversarial haze attack and an in-depth study in the joint adversarial exposure and noise attack. Experiments show that joint perturbation achieves a higher success rate and fools the target model more effectively.

(3) We propose a class of black-box adversarial loss functions based on feature consistency. These loss functions solve the adversarial attack problem for co-salient object detection in a black-box manner and achieve transferability beyond traditional white-box attacks.

We conduct extensive experiments on benchmark datasets and verify the effectiveness of the proposed methods: quantitative analysis shows that our methods achieve attack performance and image quality comparable to state-of-the-art noise-based attacks and other baseline attacks under fair settings, and visualizations show that the perturbations our methods apply to adversarial examples are imperceptible to the human eye. We hope this thesis contributes to the development of adversarial attacks based on imaging perturbation as well as other natural non-noise perturbations.
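The abstract describes the methods only at a high level. As illustration, below is a minimal PyTorch sketch, not the thesis implementation, of what an optimization-based haze attack might look like: haze is synthesized with the standard atmospheric scattering model I = J·t + A·(1 − t), and the transmission map and atmospheric light are optimized to maximize the classifier's loss. All names (model, image, label) are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def haze_attack(model, image, label, steps=50, lr=0.01):
    """image: (1, 3, H, W) in [0, 1]; label: (1,) ground-truth class index."""
    for p in model.parameters():          # freeze the target model
        p.requires_grad_(False)
    # Learnable haze parameters: per-pixel transmission logits and a
    # global atmospheric light, both initialized to a mild haze.
    t_logit = torch.full_like(image[:, :1], 2.0, requires_grad=True)  # t ~ 0.88
    A = torch.full((1, 3, 1, 1), 0.8, requires_grad=True)
    opt = torch.optim.Adam([t_logit, A], lr=lr)
    for _ in range(steps):
        t = torch.sigmoid(t_logit)                     # transmission in (0, 1)
        hazy = image * t + A.clamp(0, 1) * (1 - t)     # scattering model
        loss = -F.cross_entropy(model(hazy), label)    # maximize CE loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        t = torch.sigmoid(t_logit)
        return (image * t + A.clamp(0, 1) * (1 - t)).clamp(0, 1)
```

Likewise, a hedged sketch of a feature-consistency loss for the black-box co-salient object detection setting, under the assumption that co-saliency relies on a shared feature cue across the image group: the adversarial image's features are pushed away from the group's consensus feature computed by a surrogate backbone, so transferability comes from attacking the surrogate rather than the target detector. The function surrogate is an assumed feature extractor returning (N, C) embeddings.

```python
def feature_consistency_loss(surrogate, adv_image, group_images):
    """Lower values mean adv_image is less consistent with the group."""
    f_adv = surrogate(adv_image)                                  # (1, C)
    with torch.no_grad():
        f_group = surrogate(group_images).mean(0, keepdim=True)   # consensus
    # Cosine similarity to the group consensus; an attack minimizes it
    # while optimizing its exposure/noise perturbation parameters.
    return F.cosine_similarity(f_adv, f_group).mean()
```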
Keywords/Search Tags: Adversarial attack, Imaging perturbation, Image classification, Co-salient object detection, Face recognition