
Adversarial Attack Against Deep Learning Based Face Recognition Models

Posted on: 2022-09-16    Degree: Master    Type: Thesis
Country: China    Candidate: Q Z Li    Full Text: PDF
GTID: 2518306353979809    Subject: Control Science and Engineering
Abstract/Summary:
Recently, deep learning has developed rapidly, and a large number of AI technologies built on it have entered daily life, such as face recognition, speech recognition, autonomous driving, and person re-identification. Face recognition is among the most widely used of these technologies. However, many recent works show that deep neural networks are vulnerable to adversarial attacks, and adversarial examples have attracted great attention from researchers. An adversarial attack modifies clean examples in a way that is difficult for the human eye to detect: artificial noise is added to a clean example so that the model makes a wrong prediction, and the perturbed example is called an adversarial example. In the face recognition task, an adversarial attack makes small changes to face images. Research on adversarial attacks against face recognition models is not only significant for protecting face privacy in the era of big data, but also provides inspiration for defending against malicious attacks and improving the robustness of models.

We analyze the pros and cons of current attack methods. On the one hand, face recognition systems usually limit the number of queries a user can make to the model, while some query-based attack methods require a large number of queries and are easily detected. On the other hand, for the purpose of protecting face privacy, we cannot access the target model. Therefore, we choose transfer-based adversarial attacks and propose three methods to improve the transferability of adversarial examples:

(1) Adaptive angle and length joint optimization (A-C&L). We analyze the shortcoming of minimizing only the cosine similarity between the features of the adversarial example and the clean example: it reduces the norm of the adversarial feature vector and causes overfitting to the source model, resulting in poor transferability. To address this, we propose a joint optimization of an adaptive angle term and a length term that preserves the norm of the adversarial features, which improves the success rate of adversarial attacks.

(2) Partial linear backpropagation algorithm (PLB). We start from the linearity hypothesis of adversarial examples, namely that their existence and transferability stem from the linear characteristics of deep neural network models. Based on this assumption, we propose a partial linear backpropagation algorithm that keeps the forward computation unchanged but skips some nonlinear activation functions during backpropagation.

(3) Transferability enhancement algorithm based on multiple hidden-layer outputs (H-ILA). We analyze a shortcoming of the Intermediate Level Attack (ILA), which uses adversarial examples generated by other methods to produce more transferable ones: ILA ignores the history information of the first attack. To address this, we propose a transferability enhancement algorithm based on multiple hidden-layer outputs, which exploits the outputs of more hidden layers. Extensive experiments show that this method improves the success rate of face adversarial attacks.

We show that all three methods improve the transferability of face adversarial examples. Beyond face recognition models, we also attack image classification models; PLB and H-ILA likewise improve the success rate of adversarial attacks, which demonstrates their effectiveness.
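The A-C&L idea above can be illustrated with a minimal NumPy sketch. The function name `acl_loss`, the fixed weight `lam`, and the squared-difference length term are assumptions for illustration; the abstract does not specify the thesis's exact adaptive weighting scheme.

```python
import numpy as np

def acl_loss(f_adv, f_clean, lam=1.0):
    """Illustrative joint angle/length objective (names and weighting assumed).

    Minimizing only the cosine term tends to shrink ||f_adv|| and overfit the
    source model; the length penalty keeps the adversarial feature norm close
    to the clean feature norm.
    """
    cos = np.dot(f_adv, f_clean) / (np.linalg.norm(f_adv) * np.linalg.norm(f_clean))
    length = (np.linalg.norm(f_adv) - np.linalg.norm(f_clean)) ** 2
    # The attacker minimizes this: push the angle apart, keep the length.
    return cos + lam * length
```

Minimizing this objective drives the adversarial feature away from the clean feature in angle while discouraging the norm collapse that the cosine-only objective permits.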
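The PLB mechanism can be sketched on a two-layer toy network in NumPy. The function names are hypothetical, and skipping the single ReLU here stands in for the thesis's choice of skipping some activations in a deep model; the forward pass is left untouched, only the backward pass changes.

```python
import numpy as np

def forward(x, W1, W2):
    """Toy two-layer network: W2 @ relu(W1 @ x)."""
    z1 = W1 @ x
    a1 = np.maximum(z1, 0)        # ReLU in the forward pass, always applied
    return W2 @ a1, z1

def input_grad(x, W1, W2, skip_relu=False):
    """Gradient of sum(output) w.r.t. x, by hand-written backprop.

    With skip_relu=True the backward pass treats the ReLU as identity
    (a PLB-style partially linear backward pass); the forward computation
    is unchanged either way.
    """
    _, z1 = forward(x, W1, W2)
    g = W2.T @ np.ones(W2.shape[0])   # dL/da1 for L = sum(output)
    if not skip_relu:
        g = g * (z1 > 0)              # standard backprop: gate by ReLU derivative
    return W1.T @ g
```

The perturbation direction an attacker follows is this input gradient; PLB's claim is that the partially linearized direction transfers better across models.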
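An H-ILA-style objective can likewise be sketched in NumPy. The abstract only says the method uses the outputs of multiple hidden layers, so the specific form below, summing, over layers, the projection of the current feature perturbation onto the perturbation produced by the first-stage attack, is an assumed ILA-like instantiation, and `hila_objective` is a hypothetical name.

```python
import numpy as np

def hila_objective(h_adv_layers, h_clean_layers, h_guide_layers):
    """Illustrative multi-layer ILA-style objective (form assumed).

    For each hidden layer, project the current perturbation direction
    (h_adv - h_clean) onto the guide direction (h_guide - h_clean) left by
    a first-stage attack, and sum the projections; the attacker maximizes this.
    """
    total = 0.0
    for h_adv, h_clean, h_guide in zip(h_adv_layers, h_clean_layers, h_guide_layers):
        d_guide = h_guide - h_clean   # direction from the first attack
        d_cur = h_adv - h_clean       # current perturbation at this layer
        total += np.dot(d_cur, d_guide)
    return total
```

Single-layer ILA uses one such term; aggregating several layers is one way to retain more of the history information from the first attack, in the spirit of the method described above.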
Keywords/Search Tags: Deep Neural Network, Adversarial Example, Face Recognition, Image Classification