
Research On Physical Adversarial Example Generation Technology For Object Detection And Recognition

Posted on: 2024-03-23    Degree: Master    Type: Thesis
Country: China    Candidate: W L Zhang    Full Text: PDF
GTID: 2568307100473494    Subject: Cyberspace security
Abstract/Summary:
At present, artificial intelligence technologies represented by deep learning have achieved success in numerous fields, especially computer vision. Models and algorithms for image recognition, object detection, and instance segmentation based on deep neural networks are gradually being deployed in practical systems, where they play a crucial role. However, the existence of adversarial examples exposes significant security risks inherent in deep learning models: by adding subtle perturbations to the original data, an attacker can lead powerful deep learning models to make incorrect decisions or suffer performance degradation. Physical-domain adversarial examples pose a particular threat in real-world application scenarios. Research on generating physical-domain adversarial examples against object detection and recognition models in realistic scenarios not only deepens understanding of the security risks and vulnerabilities these models face, but also helps improve their robustness and security through adversarial training.

This thesis focuses on deep learning models for object detection and recognition, starts from the real-world application scenarios of adversarial example attacks, and studies the generation of physical-domain adversarial examples. The purpose is to strengthen the attack effect of adversarial examples in realistic scenarios, provide technical support for improving model security, and enrich the security testing methods available for such models. The main work includes:

1. To address the low effectiveness of existing adversarial examples against occluded face recognition models, we propose an adversarial example generation method that adapts both the attack strategy and the perturbation location. Mainstream occlusion-aware recognition methods either enhance local features or inpaint the occluded region before recognition; traditional attack methods consider neither local feature enhancement nor the risk of the adversarial perturbation being removed by inpainting. First, the generation strategy is adjusted according to the target model, and the perturbation region is adapted automatically to the input face. Second, by concentrating the perturbation on the regions with the greatest influence on recognition, and by combining model ensembles with Gaussian filtering, we achieve black-box attacks on the local-feature-enhanced ArcSoft and Baidu face recognition services: the success rate of untargeted attacks exceeds 40%, and that of targeted attacks exceeds 15%. Finally, combining a dynamic mask with a dynamic perturbation multiplier avoids redundant computation during the attack, reducing the average number of iterations by about 33% compared with MI-FGSM and keeping the attack sustainable. The generated perturbation also causes inpainting-based occlusion recognition models to mis-segment the occluded region, which in turn lowers their recognition accuracy. The adaptation of attack strategy and perturbation location provides useful support for research on physical adversarial example generation; a minimal sketch of the adaptive attack loop is given below.
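The following PyTorch sketch illustrates such an adaptive, momentum-based attack loop. The ensemble list, the dynamic-mask input, the box-blur stand-in for Gaussian filtering, and the step-size schedule are illustrative assumptions rather than the thesis implementation; only the MI-FGSM-style momentum update follows the method named in the text.

    import torch
    import torch.nn.functional as F

    def adaptive_attack(models, x, y, mask, steps=50, eps=8 / 255, mu=1.0):
        # x: face images in [0, 1]; mask: per-pixel weighting that confines the
        # perturbation to regions with high impact on recognition (dynamic mask).
        c = x.size(1)
        blur = torch.full((c, 1, 3, 3), 1.0 / 9, device=x.device)  # box-blur stand-in
        delta = torch.zeros_like(x)
        g = torch.zeros_like(x)                 # MI-FGSM momentum accumulator
        alpha, prev_loss = eps / 10, float("inf")
        for _ in range(steps):
            delta.requires_grad_(True)
            # Ensemble loss over surrogate recognizers improves black-box transfer.
            loss = sum(F.cross_entropy(m(x + delta), y) for m in models) / len(models)
            grad, = torch.autograd.grad(loss, delta)
            # Smoothing the gradient keeps the perturbation printable and camera-robust.
            grad = F.conv2d(grad, blur, padding=1, groups=c)
            g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)
            # Dynamic perturbation multiplier (assumed schedule): enlarge the step
            # while the untargeted loss stalls, to avoid redundant iterations.
            alpha = min(alpha * 1.5, eps) if loss.item() <= prev_loss else eps / 10
            prev_loss = loss.item()
            # Gradient ascent on the loss, restricted to the masked region.
            delta = (delta.detach() + alpha * g.sign() * mask).clamp(-eps, eps)
        return (x + delta).clamp(0, 1)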
transformation. We first analyse the characteristics of the non-rigid deformation that a physical adversarial patch undergoes during actual application, and propose a TPS-based modelling method for non-rigid planar physical attacks that combines the TPS transformation with the Shi-Tomasi corner detection algorithm to obtain more deformation control points. In the digital domain, the success rate of the TPA target-disappearance attack improves by nearly 40% over Adv T-shirt, and that of the target-misclassification attack by nearly 10%. The TPS-based non-rigid deformation modelling effectively improves the robustness of the generated adversarial patches in the physical domain: TPA with this method improves the attack success rate by 38.9% for target-disappearance attacks and by 34.8% for target-misclassification attacks compared with TPA without it. This physical adversarial example generation technique also facilitates the construction of more robust models and training strategies: introducing physical adversarial examples during training lets the model learn to identify and resist such perturbations, improving performance in real-world application scenarios. (A sketch of the TPS warping step appears after contribution 3 below.)

3. To address the lack of research on the security implications of radial distortion in fisheye cameras, we propose U-ODA (Universal Object Detection Attack), a semi-transparent physical adversarial example generation method targeting person detection models for rotatable fisheye cameras. Traditional adversarial example generation methods do not take the imaging distortion of fisheye lenses into account, which greatly weakens their attack capability. We design a translucent physical adversarial patch for RAPiD (Rotation-Aware People Detection), a person detection model for rotatable-angle fisheye cameras, and achieve an effective attack by mounting the patch on the fisheye camera's lens. In addition, considering that the person detection model acquires images continuously, we divide the fisheye images into different image sets according to the intensity of change between consecutive frames and generate a universal adversarial example combined with the semi-transparent physical patch. Across three different fisheye detection image sets, the error rate caused by target-disappearance attacks exceeds 75% and that caused by target-forgery attacks exceeds 80%, showing good generalization and robustness to image transformations. (A compositing sketch for the translucent patch also appears below.)
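The TPS warping step in contribution 2 can be sketched with OpenCV's thin-plate-spline shape transformer (from opencv-contrib) driven by Shi-Tomasi corners. The function name, jitter magnitude, and random displacement model are assumptions used only to illustrate the non-rigid deformation augmentation:

    import cv2
    import numpy as np

    def tps_deform(patch, jitter=0.05, max_pts=20):
        # Shi-Tomasi corner detection supplies the deformation control points.
        gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
        corners = cv2.goodFeaturesToTrack(gray, max_pts, qualityLevel=0.01, minDistance=10)
        if corners is None:                      # no corners: return patch unwarped
            return patch
        src = corners.reshape(1, -1, 2).astype(np.float32)
        # Randomly displace each control point to mimic the non-rigid cloth
        # deformation a worn patch undergoes (displacement model is assumed).
        h, w = gray.shape
        dst = (src + np.random.uniform(-jitter, jitter, src.shape) * [w, h]).astype(np.float32)
        tps = cv2.createThinPlateSplineShapeTransformer()
        matches = [cv2.DMatch(i, i, 0.0) for i in range(src.shape[1])]
        # If the warp direction looks inverted, swap the two point sets
        # (a known quirk of OpenCV's shape-transformer API).
        tps.estimateTransformation(dst, src, matches)
        return tps.warpImage(patch)

During patch optimization, such random TPS warps would be applied each iteration in an expectation-over-transformation fashion, so the patch remains adversarial under physical deformation.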
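For contribution 3, the effect of a semi-transparent patch mounted on the fisheye lens can be approximated in the digital domain by alpha-compositing a learnable RGBA patch over every frame. The compositing model and names below are assumptions for illustration, not the thesis implementation:

    import torch

    def apply_translucent_patch(frames, patch_rgba):
        # frames: (B, 3, H, W) fisheye images in [0, 1].
        # patch_rgba: (4, H, W) learnable patch; channel 3 is its opacity map.
        rgb = patch_rgba[:3].clamp(0, 1)
        alpha = patch_rgba[3:4].clamp(0, 1)   # semi-transparency of the lens film
        # Alpha compositing: a film on the lens itself partially occludes every
        # pixel it covers, regardless of the camera's rotation angle.
        return alpha * rgb + (1 - alpha) * frames

A universal patch would then be optimized so that, averaged over the frame sets grouped by inter-frame change intensity, the detector's person confidences collapse (target disappearance) or spurious detections appear (target forgery).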
Keywords/Search Tags:deep neural network, adversarial example, object detection, face recognition, physical attack