
Generating Adversarial Image Examples For Deep Learning Models

Posted on: 2022-02-06
Degree: Master
Type: Thesis
Country: China
Candidate: Z J Chen
Full Text: PDF
GTID: 2518306335458344
Subject: Automation Technology
Abstract/Summary:
Human society has entered the era of artificial intelligence. From intelligent shopping guides in malls to smart healthcare, many industries are gradually becoming intelligent, and as application scenarios grow more complex, higher demands are placed on the accuracy and safety of artificial intelligence technology. Faced with massive amounts of data, making computers understand that data correctly has become a common goal for developers. The deep convolutional neural network is one of the most widely used machine learning algorithms; because of its excellent performance on computer vision tasks, many such tasks are built on it. Nevertheless, numerous research results in recent years have confirmed that deep convolutional networks are susceptible to interference from adversarial examples, whose existence has become the biggest security risk in applying artificial intelligence technology.

By summarizing and reflecting on related work in recent years, this thesis proposes improvements to the algorithm that uses traditional Perlin noise to generate adversarial examples. The improvements cover optimizing the noise texture, the color mapping, and the fusion of the noise texture with the original image. Experimental results show that adversarial examples generated by the improved algorithm achieve a stronger attack effect on the target model.

In addition, compared with previous work, this thesis proposes a novel perturbation method that adds different noise textures to different regions of the image, and uses the adversarial examples generated this way to attack current mainstream deep learning models. Experimental results show that the proposed region-wise perturbation achieves an attack success rate of more than 90% and realizes an efficient universal black-box attack on deep learning models.

Subsequently, this thesis discusses defense algorithms against this attack and proposes a defense method that uses an auto-encoder augmented with filters to weaken the noise interference. Experimental results show that, after being denoised by the auto-encoder, images can once again be correctly classified by deep learning models.

The significance of this research is threefold: it proposes a method for generating adversarial examples that attack deep learning models efficiently in the black-box setting; it confirms that the learning process of deep learning models is deficient in relying excessively on regional image features; and it discusses defense work against procedural-noise perturbation attacks.
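As a rough illustration of the Perlin-noise pipeline described in the abstract (not the thesis's exact algorithm), the following Python sketch generates a noise texture, applies a simple sine-based color mapping, and fuses the result with the original image under an L-infinity budget. The third-party noise package, the sine color map, the filename, and all parameter values are assumptions made for illustration.

```python
# Minimal sketch: Perlin texture -> color mapping -> fusion with the image.
# Assumes the third-party "noise" package (pip install noise), NumPy, and Pillow.
import numpy as np
from noise import pnoise2
from PIL import Image

def perlin_texture(h, w, freq=32.0, octaves=4):
    """Sample 2-D Perlin noise on an h x w grid; values are roughly in [-1, 1]."""
    tex = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            tex[i, j] = pnoise2(i / freq, j / freq, octaves=octaves)
    return tex

def colorize(tex, phases=(0.0, 2.0, 4.0)):
    """Map scalar noise to RGB with per-channel sine phases (a simple color mapping)."""
    channels = [np.sin(tex * np.pi + p) for p in phases]
    return np.stack(channels, axis=-1)  # shape (h, w, 3), values in [-1, 1]

def fuse(image, texture, eps=16.0):
    """Blend the texture into the image under an L-infinity budget eps (0-255 scale)."""
    perturbed = image.astype(np.float32) + eps * texture
    return np.clip(perturbed, 0, 255).astype(np.uint8)

img = np.array(Image.open("input.jpg").convert("RGB"))   # hypothetical input file
tex = colorize(perlin_texture(img.shape[0], img.shape[1]))
adv = fuse(img, tex)
Image.fromarray(adv).save("adversarial.jpg")
```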
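The region-wise perturbation idea can be sketched in the same vein: split the image into a grid and blend a differently parameterized texture into each cell. The grid size and the per-cell frequency schedule below are hypothetical choices, and the helpers perlin_texture and colorize come from the sketch above.

```python
def perturb_by_region(image, grid=4, eps=16.0):
    """Add a different Perlin texture to each cell of a grid x grid partition."""
    h, w, _ = image.shape
    out = image.astype(np.float32)
    ch, cw = h // grid, w // grid  # cell size; any remainder rows/cols are left untouched
    for r in range(grid):
        for c in range(grid):
            ys, xs = r * ch, c * cw
            # Vary the noise frequency per cell so each region gets its own texture.
            freq = 8.0 * (1 + (r * grid + c) % 4)
            cell_tex = colorize(perlin_texture(ch, cw, freq=freq))
            out[ys:ys + ch, xs:xs + cw] += eps * cell_tex
    return np.clip(out, 0, 255).astype(np.uint8)
```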
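A denoising auto-encoder of the kind used in the defense might look like the following PyTorch sketch. The architecture and the toy training step (reconstructing clean images from perturbed ones) are illustrative assumptions rather than the thesis's design, and the filters the thesis adds to the auto-encoder are omitted here.

```python
# Minimal denoising auto-encoder sketch (PyTorch); architecture is assumed.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training step: synthetic (clean, perturbed) pairs stand in for real data.
clean = torch.rand(8, 3, 64, 64)
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
opt.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
opt.step()
```

At inference time, the perturbed image would be passed through the trained auto-encoder before classification, so the classifier sees the denoised reconstruction rather than the adversarial input.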
Keywords/Search Tags:Computer vision, Deep convolutional networks, Black-box attacks, Adversarial examples