
Research On Defense Of Deep Learning Adversarial Examples Based On GAN

Posted on: 2022-09-06
Degree: Master
Type: Thesis
Country: China
Candidate: G H Zhao
Full Text: PDF
GTID: 2518306575959619
Subject: Computer technology
Abstract/Summary:
In recent years, deep neural networks have developed rapidly and achieved a series of successes in artificial intelligence, such as image classification, speech recognition, video detection, and autonomous driving. While deep neural networks bring convenience, they also raise security issues. Studies have found that adding tiny, imperceptible perturbations to an image forms adversarial examples that cause a deep neural network to misclassify with high confidence. Research on adversarial example defense algorithms based on generative adversarial networks (GANs) is therefore of great significance for improving the security of deep neural networks. Aiming at the problem of adversarial examples in deep neural networks, this thesis proposes an adversarial example defense model based on image-to-image translation and a text CAPTCHA defense algorithm based on an overall adversarial perturbation. The main research contents are as follows:

1. An AECycleGAN (Adversarial Examples Cycle-Consistent GAN) model is proposed, based on denoising, GANs, and the manifold hypothesis. Adversarial examples are regarded as mixed samples in which two types of features are entangled, so converting between adversarial and clean samples is essentially the entangling and disentangling of these two types of features. The model has two generators and two discriminators; its principle is to disentangle the adversarial features from the clean features, so that an adversarial example can be translated into a clean one and its adversarial features eliminated. AECycleGAN thus removes the adversarial features, preserves the clean features, and restores adversarial examples. In experiments on the MNIST and CIFAR-10 data sets, the model's defense success rates are 88.65% and 71.29%, respectively, indicating that AECycleGAN can defend against adversarial examples (a hedged code sketch of this cycle structure follows the abstract).

2. While adversarial examples pose security risks to deep neural networks, they can also protect privacy, for example by keeping text CAPTCHAs from being recognized by malicious programs. To strengthen text CAPTCHAs so that only humans can pass them, a defense algorithm that adds an overall adversarial perturbation is proposed. The algorithm uses a pre-trained multi-label classification model that recognizes multi-character text CAPTCHAs with 99% accuracy. On this basis, it generates a single overall adversarial perturbation covering all characters, eliminating the per-character steps of splicing, compositing, and coloring, and superimposes the perturbation to produce a secure text CAPTCHA with adversarial-example properties (a sketch of this step also follows the abstract). Experiments show that the recognition accuracy of attack models on CAPTCHAs carrying the overall perturbation drops to as low as 0.06%, effectively improving CAPTCHA security without affecting users' ability to recognize and use them quickly.

Adversarial examples threaten not only images but also text, speech, and graph networks. Defense algorithms for deep neural networks therefore play an important role in the field of network security.
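The abstract describes AECycleGAN only at a high level (two generators, two discriminators, cycle conversion between adversarial and clean domains). The thesis code is not public, so the following is a minimal PyTorch sketch of that cycle structure; the toy architectures, the names (G_a2c, G_c2a, D_clean, D_adv), the LSGAN/L1 loss choices, and the LAMBDA_CYC weight are all assumptions, not the author's implementation.

```python
# Hypothetical sketch of a CycleGAN-style adversarial-example purifier.
import torch
import torch.nn as nn

class ConvTranslator(nn.Module):
    """Toy image-to-image generator standing in for the thesis architecture."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class PatchCritic(nn.Module):
    """Toy discriminator scoring whether an image belongs to its domain."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_a2c, G_c2a = ConvTranslator(), ConvTranslator()   # adversarial->clean, clean->adversarial
D_clean, D_adv = PatchCritic(), PatchCritic()
gan_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()      # assumed LSGAN + L1 cycle terms
LAMBDA_CYC = 10.0                                   # assumed cycle-consistency weight

def generator_step(x_adv, x_clean):
    # Translate across domains: strip adversarial features / re-entangle them.
    fake_clean = G_a2c(x_adv)
    fake_adv = G_c2a(x_clean)
    # Adversarial terms: each translation should fool its domain discriminator.
    loss_gan = (gan_loss(D_clean(fake_clean), torch.ones_like(D_clean(fake_clean)))
                + gan_loss(D_adv(fake_adv), torch.ones_like(D_adv(fake_adv))))
    # Cycle-consistency: a round trip must reconstruct the original image,
    # which pressures the generators to change only the adversarial component.
    loss_cyc = cyc_loss(G_c2a(fake_clean), x_adv) + cyc_loss(G_a2c(fake_adv), x_clean)
    return loss_gan + LAMBDA_CYC * loss_cyc

# At inference time the defense is a single pass: x_purified = G_a2c(x_suspect)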
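The abstract does not state how the overall CAPTCHA perturbation is computed. Below is a generic PGD-style substitute under stated assumptions: a pre-trained multi-label recognizer `captcha_model` whose output has shape (N, positions, classes), and the eps/alpha/steps values are placeholders, not figures from the thesis.

```python
# Hypothetical sketch of crafting one whole-image ("overall") adversarial
# perturbation that degrades every character prediction at once.
import torch
import torch.nn.functional as F

def overall_perturbation(captcha_model, image, labels,
                         eps=8 / 255, alpha=2 / 255, steps=10):
    """image:  (1, C, H, W) clean CAPTCHA in [0, 1]
    labels: (1, positions) ground-truth character indices"""
    captcha_model.eval()
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = captcha_model(adv)                 # (1, positions, classes)
        # Sum the loss over all character positions so a single gradient
        # step pushes every predicted character away from the ground truth,
        # replacing per-character splicing/compositing with one perturbation.
        loss = sum(F.cross_entropy(logits[:, p, :], labels[:, p])
                   for p in range(logits.shape[1]))
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # ascend the loss
            adv = image + (adv - image).clamp(-eps, eps)  # keep it imperceptible
            adv = adv.clamp(0.0, 1.0)
        adv = adv.detach()
    return adv  # secured CAPTCHA: hard for the model, still readable to humans
```

The small L-infinity budget (eps) is what keeps the perturbed CAPTCHA legible to humans while collapsing machine recognition accuracy.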
Keywords/Search Tags: deep neural network, adversarial examples, feature entanglement, text CAPTCHA, multi-label learning