
Research On Adversarial Examples For Object Detection Tasks

Posted on: 2022-02-22 | Degree: Master | Type: Thesis
Country: China | Candidate: Y Z Nie | Full Text: PDF
GTID: 2518306740994209 | Subject: Electronics and Communications Engineering
Abstract/Summary:
Driven by the rapid development of deep learning, object detection, as a fundamental task in computer vision, has become a research hotspot in both industry and academia. However, neural networks are highly vulnerable to adversarial example attacks, which poses a serious challenge to the security of object detection. Existing adversarial example research focuses mainly on the image classification task, yet object detection is far more widely used in practical scenarios, so adversarial example research oriented to object detection is of great value. In addition, Goodfellow's 2016 work on adversarial example generation in the physical world extended adversarial example research, previously confined to the digital world, into the physical world.

In the digital-domain setting, the attacker can feed the adversarial example directly into the deep learning model. Among the existing generation techniques in this setting, attacks based on loss function optimization are mainstream, but they have two limitations. First, existing attack schemes for the detection task treat the object classification attack and the position regression attack as mutually independent tasks. Second, existing baseline attack techniques concentrate on the object itself while neglecting to suppress the inferential contribution that background information makes to detection.

In the physical-domain setting, an attacker can spoof a deployed application or system. Most existing techniques in this setting densely tamper with the original image, but the resulting adversarial examples are easy to detect. Moreover, the transferability of the attack, which determines its cost in practical deployment, still needs improvement.

To address these problems, this thesis conducts further research on adversarial examples in both the digital and physical domains. In the digital domain, a GWBA (Gaussian Weighted Background Adversarial) loss function for background attack is used to generate strongly aggressive adversarial examples. In the physical domain, an interpolation attack is used to generate adversarial examples that transfer well while satisfying the condition of visual imperceptibility. Specifically, the main contents of this study are as follows:

1. In the digital world, this thesis proposes the GWBA loss function for the first time to address the mutual independence of the classification and regression attack tasks. The method uses a Gaussian kernel function to assign a classification weight to each pixel inside the bounding box, realizing mutual guidance between the classification and regression branches and thereby coupling the two attack tasks. In addition, the method takes an enlarged positive candidate box as an additional attack target, so that background information strongly correlated with the object is also attacked and its ability to reveal the object is reduced. A minimal sketch of the weighting idea is given below.
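The abstract does not spell out the GWBA formulation, so the following is only an illustrative sketch of the Gaussian weighting and box-enlargement ideas, assuming a per-pixel confidence map for the true class and PyTorch as the framework. The function names and the sigma_scale and enlarge_factor parameters are hypothetical, and the coupling with the regression branch is omitted.

    import torch

    def enlarge_box(box, factor, h, w):
        # Enlarge (x1, y1, x2, y2) around its centre, clipped to the image,
        # so that object-adjacent background falls inside the attack region.
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        bw, bh = (box[2] - box[0]) * factor, (box[3] - box[1]) * factor
        return (max(cx - bw / 2, 0), max(cy - bh / 2, 0),
                min(cx + bw / 2, w), min(cy + bh / 2, h))

    def gaussian_weight_map(h, w, box, sigma_scale=0.5):
        # 2-D Gaussian centred on the box, with sigma tied to the box size:
        # pixels near the object centre get weight close to 1, background
        # pixels inside the enlarged box get smaller but non-zero weight.
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        sx = sigma_scale * (box[2] - box[0]) + 1e-6
        sy = sigma_scale * (box[3] - box[1]) + 1e-6
        ys = torch.arange(h, dtype=torch.float32).unsqueeze(1)  # (h, 1)
        xs = torch.arange(w, dtype=torch.float32).unsqueeze(0)  # (1, w)
        return torch.exp(-0.5 * (((xs - cx) / sx) ** 2 + ((ys - cy) / sy) ** 2))

    def gwba_loss(cls_score_map, box, enlarge_factor=1.5):
        # cls_score_map: (h, w) per-pixel confidence for the true class.
        # Minimising this weighted confidence by gradient descent on the
        # input image suppresses detection near the object centre while also
        # attacking strongly-correlated background inside the enlarged box.
        h, w = cls_score_map.shape
        big_box = enlarge_box(box, enlarge_factor, h, w)
        weight = gaussian_weight_map(h, w, big_box)
        return (weight * cls_score_map).sum() / weight.sum()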
2. In the physical world, this thesis proposes an interpolation attack technique for object detection tasks for the first time. The method summarizes the attack constraints and exploits the scaling matrix of the bilinear interpolation step in image preprocessing to generate adversarial examples that transfer well while remaining visually imperceptible. Comparison with patch attacks and other baselines shows that this method generates more aggressive adversarial examples under the constraints of visual imperceptibility and strong transferability. A sketch of the scaling-matrix idea follows.
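Again as an illustration rather than the thesis's actual optimisation: bilinear resizing is a linear map, so its scaling matrix can be inverted in the least-squares sense to hide an adversarial image inside a full-resolution one. The single-channel NumPy sketch below assumes the detector's preprocessing uses plain (non-antialiased) bilinear interpolation; bilinear_matrix and embed_adversarial are hypothetical names.

    import numpy as np

    def bilinear_matrix(n_out, n_in):
        # Matrix L such that (L @ v) is the 1-D bilinear resampling of v,
        # following the usual half-pixel-centre convention.
        L = np.zeros((n_out, n_in))
        for i in range(n_out):
            pos = (i + 0.5) * n_in / n_out - 0.5  # output centre in input coords
            lo = int(np.floor(pos))
            frac = pos - lo
            lo_c = min(max(lo, 0), n_in - 1)
            hi_c = min(max(lo + 1, 0), n_in - 1)
            L[i, lo_c] += 1.0 - frac
            L[i, hi_c] += frac
        return L

    def embed_adversarial(src, target_small):
        # Least-norm delta such that bilinearly downscaling (src + delta)
        # reproduces target_small. Plain bilinear downscaling samples only a
        # few source pixels, so delta can stay nearly invisible at full
        # resolution, which is the imperceptibility constraint above.
        H, W = src.shape
        h, w = target_small.shape
        L = bilinear_matrix(h, H)       # acts on rows:    (h, H)
        R = bilinear_matrix(w, W).T     # acts on columns: (W, w)
        residual = target_small - L @ src @ R
        delta = np.linalg.pinv(L) @ residual @ np.linalg.pinv(R)
        return np.clip(src + delta, 0.0, 1.0)  # clipping may leave a small error

    # e.g. hide a 416x416 adversarial detector input inside a full-size photo:
    # adv_full = embed_adversarial(photo_gray, adv_416)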
Keywords/Search Tags:Object detection, Adversarial examples, Joint attack, Bilinear interpolation, Background information attack