
Research On Security Of Detection Algorithm Based On Deep Learning

Posted on: 2021-09-05
Degree: Master
Type: Thesis
Country: China
Candidate: D H Liu
Full Text: PDF
GTID: 2518306476952639
Subject: Pattern Recognition and Intelligent Systems
Abstract/Summary:
With the rapid development of computer technology, deep learning is now widely used across many fields. However, adversarial examples have sounded an alarm for artificial intelligence security. Research shows that deep neural networks have inherent vulnerabilities and are highly susceptible to adversarial examples. In image processing, an adversarial example is a carefully modified image that is visually indistinguishable from the original but is misclassified by DNN models. Early adversarial attacks were confined to the digital world; in recent years, attacks represented by adversarial patches have moved into the physical world, and their targets have spread from classification models to object detection models. Analyzing the principles and mechanisms of the various adversarial attack techniques and developing effective defenses is therefore a central task in artificial intelligence security.

Building on a broad study of adversarial attack techniques, this thesis investigates adversarial patches against object detection systems in the physical world, examining both the threat they pose to model security and their potential benefit for protecting personal privacy. On the defense side, a new hybrid defense mechanism is proposed for adversarial examples in the digital domain, combining improvements to model robustness with processing of external input data. The main contributions are as follows:

(1) To cope with the many sources of interference in the physical world, affine transformations and other image augmentation methods are applied when generating the adversarial patch. The effect of the improved patch is also tested on other detection systems such as SSD and YOLOv3.

(2) To overcome the original patch-generation process's extreme sensitivity to individual abnormal pixels and to large differences between adjacent pixels, the Manhattan distances between adjacent pixel values are summed as a smoothness penalty, limiting the influence of anomalous values as far as possible (a minimal sketch of such a penalty follows this abstract). Experiments show that, against the same YOLOv2 detector, the improved adversarial patch achieves an evasion rate 1.88% higher than the original patch.

(3) To overcome the drawbacks that data-compression defenses degrade the classification of clean samples and that traditional adversarial training remains vulnerable to new adversarial examples, a hybrid adversarial-training defense is proposed that combines an improved discrete cosine transform with Gaussian noise injection to improve model robustness (an illustrative preprocessing sketch also follows this abstract). In tests on a ResNet model, misclassification of clean samples is greatly reduced compared with a JPEG compression defense, and the defense success rate is 3.8% higher than that of traditional adversarial training.
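The smoothness penalty in contribution (2) can be read as a total-variation-style term. Below is a minimal PyTorch sketch under assumed conventions (a patch stored as a (C, H, W) tensor); it illustrates the idea rather than reproducing the thesis's exact implementation, and the weight name tv_weight in the closing comment is hypothetical.

import torch

def smoothness_penalty(patch: torch.Tensor) -> torch.Tensor:
    """Sum of Manhattan (L1) distances between adjacent pixel values of a
    (C, H, W) patch; penalising it suppresses isolated abnormal pixels and
    large jumps between neighbouring pixels."""
    dx = torch.abs(patch[:, :, 1:] - patch[:, :, :-1]).sum()  # horizontal neighbours
    dy = torch.abs(patch[:, 1:, :] - patch[:, :-1, :]).sum()  # vertical neighbours
    return dx + dy

# During patch optimisation this term is added to the detection objective,
# e.g. loss = detection_loss + tv_weight * smoothness_penalty(patch)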
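The input-processing half of the hybrid defense in contribution (3) can be sketched as DCT-based low-frequency filtering followed by Gaussian noise injection. The NumPy/SciPy code below is only an illustrative sketch under assumed conventions (images as float arrays in [0, 1] with shape (H, W, C), and a hypothetical keep_ratio parameter); it does not reproduce the thesis's improved DCT or the adversarial-training loop itself.

import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(image: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """Crude DCT compression: transform each channel, zero the high-frequency
    coefficients, and invert, discarding the high frequencies where
    adversarial noise tends to concentrate."""
    out = np.empty_like(image, dtype=np.float64)
    h, w, _ = image.shape
    kh, kw = int(h * keep_ratio), int(w * keep_ratio)
    for c in range(image.shape[2]):
        coeffs = dctn(image[:, :, c].astype(np.float64), norm="ortho")
        mask = np.zeros_like(coeffs)
        mask[:kh, :kw] = 1.0  # keep only the low-frequency block
        out[:, :, c] = idctn(coeffs * mask, norm="ortho")
    return np.clip(out, 0.0, 1.0)

def gaussian_inject(image: np.ndarray, sigma: float = 0.02) -> np.ndarray:
    """Add zero-mean Gaussian noise as a simple randomised smoothing step."""
    return np.clip(image + np.random.normal(0.0, sigma, image.shape), 0.0, 1.0)

# Inputs would be preprocessed before being fed to adversarial training:
# x_processed = gaussian_inject(dct_compress(x))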
Keywords/Search Tags: deep learning, object detection, adversarial example, robustness