
Research On Adversarial Examples Attack Strategy Oriented To Product Image Classification

Posted on: 2022-07-28
Degree: Master
Type: Thesis
Country: China
Candidate: F He
Full Text: PDF
GTID: 2518306605989379
Subject: Master of Engineering

Abstract/Summary:
Deep neural networks now play an irreplaceable role in artificial intelligence, particularly in autonomous driving, image recognition, and speech recognition. However, as research on deep neural networks has deepened, their security problems have gradually emerged and attracted considerable attention from researchers. Studies show that in image classification, existing deep neural network-based classifiers are highly vulnerable to spoofing attacks by malicious samples: by adding perturbations that are nearly imperceptible to the human eye to clean images, an attacker can cause the network model to produce incorrect classification outputs. Image samples that are carefully crafted from the original images in this way to mislead the output of a deep neural network model are called adversarial examples. The threat that adversarial examples pose to neural networks is severe; in important application systems, maliciously feeding adversarial examples into the input can lead to serious consequences. How to improve defense capabilities has therefore also become a research hotspot, and in recent years attack and defense techniques for image classification have developed rapidly through competition.

This thesis studies the classic attack and defense strategies published at home and abroad in recent years. The attack methods include the gradient-based Fast Gradient Sign Method (FGSM), the momentum-iterative gradient algorithm MIFGSM, the DeepFool algorithm, and the C&W attack; the defense methods include adversarial training, randomization-based defenses, and denoising-based defenses. Although MIFGSM is a common and effective attack in actual competition settings, it still cannot meet high requirements on its own. Drawing on experience gained in actual competitions, this thesis focuses on targeted black-box attacks and proposes a series of comprehensive enhancement strategies built on MIFGSM ensemble attacks. The main innovations, illustrated by the sketch that follows this list, are:
(1) Inspired by the adverse effect of overfitting on image classification, dropout is applied to the network again to improve the generalization of the generated adversarial examples.
(2) Inspired by Gaussian-noise image smoothing, random noise is added to the logits to smooth the loss and increase the attack success rate of the adversarial examples.
(3) Based on the hypothesis that excessive iterations harm both classification accuracy and the transferability of the generated adversarial examples, an adaptive early-stopping strategy is proposed that halts the iteration in time to improve transferability and reduce the perturbation.
(4) Based on the influence of the weights of different models in the ensemble attack on the attack success rate, an adaptive ensemble-weight adjustment strategy is proposed that flexibly adjusts the weights at each iteration when generating adversarial examples, enhancing the success rate of attacks on the network models.
(5) Because the iteration step size affects the classification accuracy of the network model, an adaptive step-size adjustment strategy is proposed to improve the attack success rate of the adversarial examples.
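The abstract does not reproduce the thesis's formulas, so the following is only a minimal sketch, in PyTorch-style Python, of how an MIFGSM ensemble attack with enhancements (2) and (4) could be organized. The noise scale, the weight-update rule, and all names (mifgsm_ensemble_attack, logit_noise_std, and so on) are illustrative assumptions, not the thesis's implementation.

```python
import torch
import torch.nn.functional as F

def mifgsm_ensemble_attack(models, x, y_target, eps=16/255, alpha=2/255,
                           steps=10, mu=1.0, logit_noise_std=0.1):
    # Momentum-iterative (MI-FGSM) ensemble attack with two of the
    # enhancements described above: (2) Gaussian noise on the logits and
    # (4) adaptive ensemble weights. Both rules are assumptions.
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                                   # accumulated momentum
    weights = torch.full((len(models),), 1.0 / len(models))   # initial ensemble weights

    for _ in range(steps):
        x_adv.requires_grad_(True)
        losses = []
        for model in models:
            logits = model(x_adv)
            # (2) smooth the loss surface by perturbing the logits
            logits = logits + logit_noise_std * torch.randn_like(logits)
            losses.append(F.cross_entropy(logits, y_target))

        # (4) shift weight toward models that are still far from the target class
        with torch.no_grad():
            weights = 0.5 * weights + 0.5 * torch.softmax(torch.stack(losses), dim=0).cpu()

        loss = sum(w.to(x.device) * l for w, l in zip(weights, losses))
        grad = torch.autograd.grad(loss, x_adv)[0]

        # standard MI-FGSM momentum update; targeted attack, so step down the loss
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() - alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv.detach()
```

Enhancements (3) and (5), the adaptive early stopping and the adaptive step size, would replace the fixed `steps` and `alpha` above with values updated per iteration; the abstract does not specify those rules.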
On the basis of the above schemes, a comprehensive evaluation combining "attack success rate" and "perturbation size" is adopted, with the MIFGSM ensemble scheme as the baseline strategy. Each enhancement scheme was tested in extensive individual and combined experiments on a commodity-image dataset and an ImageNet sub-dataset, and its contribution was verified by scoring the schemes against different network models. Both in actual competitions and in offline tests, every enhancement scheme proposed in this thesis achieves a significant improvement in the overall score over the baseline strategy. The contributions of the schemes differ: some focus on raising the attack success rate, others on reducing the perturbation size while also improving the transferability of the adversarial examples, and a selective combination of the schemes ultimately yields results beyond expectations.
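As a companion to the evaluation described above, the short sketch below shows one way an "attack success rate" versus "perturbation size" score could be combined. The actual competition scoring formula is not given in this abstract, so the specific weighting and the names (combined_score, max_eps) are purely assumptions.

```python
import torch

def combined_score(x, x_adv, fooled, max_eps=16/255):
    # Illustrative combined metric: reward a high attack success rate and
    # penalize large perturbations; both terms here are assumptions.
    success_rate = fooled.float().mean()                      # fraction of images misclassified as intended
    perturbation = ((x_adv - x).abs().mean() / max_eps).clamp(0.0, 1.0)  # normalized mean perturbation
    return success_rate * (1.0 - perturbation)
```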
Keywords/Search Tags:Deep neural networks, Image classification, Adversarial examples, Attack and defense