
Research On Adversarial Examples And Defense Methods For Object Detection Model Of Driverless Cars

Posted on: 2022-07-03    Degree: Master    Type: Thesis
Country: China    Candidate: D W Chen    Full Text: PDF
GTID: 2492306338475414    Subject: Computer Science and Technology
Abstract/Summary:
Today, driverless cars are an important research topic in the field of artificial intelligence, and object detection models based on deep neural networks are a key technology in driverless cars. However, adversarial examples can cause deep-neural-network object detection models to make wrong judgments, which threatens the safety of driverless cars. To address this problem, this thesis studies adversarial examples and defense methods for the object detection models of driverless cars: a black-box adversarial example generation algorithm with strong transferability is proposed, and, building on that algorithm, an object detection defense model with strong robustness is proposed. The specific research contents of the thesis are as follows:

(1) The mechanism by which adversarial examples affect object detection systems is explored. Existing research shows that adversarial examples behave differently across application fields. This thesis first constructs the mainstream object detection models, which achieve high accuracy on clean samples. The principles of white-box and black-box adversarial attacks are then studied, and the causes and characteristics of adversarial examples are analyzed. Finally, the impact of adversarial examples on the security of object detection systems is discussed, and the vulnerability of the object detection system of driverless cars is analyzed as the basis for the theoretical research.

(2) A black-box attack algorithm with strong transferability against the object detection system of driverless cars is proposed. Building on the mechanism above, and considering the deployment of the object detection system of driverless cars in a strict black-box scenario, this thesis improves the MI-FGSM (Momentum Iterative Fast Gradient Sign Method) algorithm and combines L∞ perturbation with spatial transformation on the basis of ensemble learning, which significantly improves the transferability of adversarial examples. Compared with other black-box attack algorithms, the proposed algorithm is more aggressive, generates adversarial examples more efficiently, and attacks the object detection model more effectively.

(3) Based on the proposed adversarial example generation algorithm, a defense strategy against adversarial attacks is proposed. First, the proposed black-box attack algorithm is used to generate the adversarial examples for adversarial training; then the impact of the noise injection position on defense performance is analyzed; finally, adversarial training is combined with noise injection into the weight parameters, which significantly improves the robustness of the object detection model. Extensive experiments on the nuScenes autonomous driving dataset show that the proposed defense strategy effectively improves the robustness of the object detection system of driverless cars: it mitigates various adversarial attacks against the object detection model without reducing the model's accuracy on clean samples.
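The MI-FGSM update that the proposed attack builds on can be sketched as follows. This is a minimal NumPy illustration on a toy differentiable loss with an analytic gradient; the function names, toy loss, and hyperparameter values are illustrative assumptions, not the thesis's actual implementation against object detectors or its ensemble/spatial-transformation extensions.

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.1, steps=10, mu=1.0):
    """Momentum Iterative FGSM: accumulate an L1-normalized gradient
    momentum g, step in the direction sign(g), and keep the iterate
    inside the L-infinity ball of radius eps around the clean input x."""
    alpha = eps / steps              # per-step size
    g = np.zeros_like(x)             # gradient momentum
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to L-inf ball
    return x_adv

# Toy check: "attack" the loss f(x) = sum(x**2) by ascending its gradient 2*x.
x0 = np.ones(4)
x_adv = mi_fgsm(x0, lambda z: 2 * z, eps=0.1)
```

The momentum term is what distinguishes MI-FGSM from plain iterative FGSM: stabilizing the update direction across iterations is also what tends to make the resulting examples transfer better to unseen (black-box) models.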
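The defense described in (3) pairs adversarial training with noise injected into the weight parameters. A minimal sketch of one such training step, again in NumPy on a toy linear regression model; the function names, the magnitude-scaled Gaussian noise, and the analytic gradients are illustrative assumptions, not the thesis's detector-level procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_weight_noise(w, sigma=0.01):
    """Parametric noise injection: return a copy of the weights with
    zero-mean Gaussian noise scaled by each weight's magnitude."""
    return w + sigma * np.abs(w) * rng.standard_normal(w.shape)

def adversarial_training_step(w, x, y, grad_x, grad_w, lr=0.05, eps=0.1):
    """One adversarial training step with weight-noise injection:
    1. craft an adversarial input inside the L-inf ball around x,
    2. evaluate the weight gradient with noise-injected weights,
    3. take a gradient descent step on the clean weights."""
    x_adv = np.clip(x + eps * np.sign(grad_x(w, x, y)), x - eps, x + eps)
    w_noisy = inject_weight_noise(w)
    return w - lr * grad_w(w_noisy, x_adv, y)

# Toy model: linear regression with loss = 0.5 * (w @ x - y) ** 2.
grad_x = lambda w, x, y: w * (w @ x - y)   # d loss / d x
grad_w = lambda w, x, y: x * (w @ x - y)   # d loss / d w

w = np.zeros(2)
x, y = np.array([1.0, 2.0]), 1.0
for _ in range(50):
    w = adversarial_training_step(w, x, y, grad_x, grad_w)
```

Training on the perturbed input hardens the model against the attack used to generate it, while the weight noise discourages sharp minima; the abstract's point about the injection position is that where this noise enters the network affects how much robustness is gained.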
Keywords/Search Tags: Driverless cars, Object detection, Black-box adversarial examples, Defense model, Parametric noise injection, Adversarial training