
Research On Practical Adversarial Examples Generation Based On Deep Learning

Posted on: 2022-05-20
Degree: Master
Type: Thesis
Country: China
Candidate: J J Hou
Full Text: PDF
GTID: 2518306731977689
Subject: Computer technology
Abstract/Summary:
Deep Neural Networks (DNNs), as an important part of artificial intelligence, have achieved excellent results in malware detection, autonomous driving, and image classification. However, recent studies have shown that neural networks are vulnerable to adversarial examples (AEs). These carefully crafted perturbations to the input of a DNN can easily mislead the network into making incorrect predictions, and they pose serious threats to safety-critical applications such as face recognition systems and autonomous driving. To improve the security of neural networks, the research community studies adversarial attacks to find the blind spots of a network and adopts corresponding defense strategies to improve the robustness of the model. The study of adversarial example generation algorithms is therefore of great practical significance for artificial intelligence security. This thesis mainly studies how to generate more realistic adversarial examples. The main contributions are as follows:

(1) A new face attribute adversarial attack framework based on Generative Adversarial Networks (GANs) is proposed. The attack ability of adversarial examples mainly depends on the range of the added perturbations: small perturbations limit the attack capacity, while unconstrained perturbations reduce the stealthiness of adversarial examples because the perturbations become obvious. To address this issue, this work proposes an attribute-based adversarial attack on face recognition models that uses a generative adversarial network for attribute transformation, hiding the perturbations in the transformed facial attribute space to generate face adversarial examples in a more realistic way.

(2) A practical adversarial patch attack for multi-size images is proposed. Neural networks have a fixed inherent input size in image classification tasks, and most existing adversarial example research focuses on attacking pre-processed images of that size. In the real world, however, image sizes vary, and adversarial patches generated by previous methods are only effective on images of a certain size; on images of other sizes the patches become invalid. From a more realistic standpoint, this work proposes a practical adversarial patch attack for multi-size images: the image remains visually realistic after patch placement, but once it is scaled to the model's input size in the data pre-processing stage, the neural network is successfully deceived.

(3) A large number of experiments are designed to evaluate the above framework and method. For the face attribute adversarial attack, the experiments show that the framework generates face adversarial images faster and with a higher attack success rate than previous mainstream attack algorithms; the attack success rate exceeds 90% when attacking different attributes. For the adversarial patch attack, the results show that, without changing the original image size, image-dependent adversarial patches achieve a targeted attack success rate of 73.8% and a non-targeted success rate of 88.5% when the patch covers 5% of the original image; as the patch size grows to 10%, the success rate exceeds 90%. Universal adversarial patches achieve a targeted success rate of 50% and a non-targeted success rate of 73% at a patch size of 10%; when the patch size is 15%, the success rate exceeds 95%.
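As context for the attacks studied here, the basic vulnerability can be illustrated with a minimal gradient-sign sketch in the spirit of FGSM (not the thesis's own method) on a toy two-feature logistic classifier; all weights and values below are hypothetical.

```python
import math

# Toy logistic classifier with hypothetical fixed weights; a stand-in for
# a trained DNN, used only to show the principle of adversarial examples.
w, b = [2.0, -1.0], 0.0

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # probability of class 1

def fgsm(x, y, eps):
    # For binary cross-entropy, the gradient of the loss w.r.t. the input
    # of a logistic model is (p - y) * w.
    p = predict(x)
    grad = [(p - y) * wi for wi in w]
    # Perturb each feature by eps in the direction that increases the loss.
    return [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

x = [1.0, 0.5]            # clean input, true label 1 (model agrees: p > 0.5)
x_adv = fgsm(x, 1, 0.9)   # adversarial input: prediction flips below 0.5
```

The same sign-of-gradient idea scales to deep networks, where the perturbation budget eps controls the trade-off between attack strength and stealthiness discussed above.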
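The multi-size patch attack exploits the resize step of data pre-processing. A minimal sketch, assuming nearest-neighbour resizing, grayscale pixels, and hypothetical image sizes (not the thesis's actual pipeline), shows how a proportionally placed patch lands on predictable model-input pixels after scaling.

```python
# Nearest-neighbour resize of a 2-D grayscale image (list of rows).
def resize_nearest(img, out_h, out_w):
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]

# Place a patch at a position given as a fraction of the image size,
# so the placement is consistent across image resolutions.
def place_patch(img, patch, top_frac, left_frac):
    h, w = len(img), len(img[0])
    top, left = int(top_frac * h), int(left_frac * w)
    out = [row[:] for row in img]
    for i, prow in enumerate(patch):
        for j, v in enumerate(prow):
            out[top + i][left + j] = v
    return out

# A 2x2 patch on a hypothetical 8x8 image, placed at the centre; after
# resizing to a hypothetical 4x4 model input, the patch still covers the
# corresponding model-input pixel regardless of the original resolution.
big = place_patch([[0] * 8 for _ in range(8)], [[9, 9], [9, 9]], 0.5, 0.5)
small = resize_nearest(big, 4, 4)
```

The attack described in the thesis optimizes the patch so that its content survives exactly this scaling step, whatever size the original image happens to be.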
Keywords/Search Tags:Deep Neural Networks, Adversarial Examples, Generative Adversarial Network, Adversarial Attack