
Research Of Adversarial Attack Method In Face Recognition

Posted on: 2021-07-04
Degree: Master
Type: Thesis
Country: China
Candidate: J K Yan
Full Text: PDF
GTID: 2518306104487874
Subject: Computer system architecture
Abstract/Summary:
The development of deep neural networks has brought significant progress to face recognition technology, but deep learning models are vulnerable to adversarial attacks. An adversarial sample is an input deliberately crafted with subtle perturbations that cause the model to malfunction, and face recognition models are also susceptible to such attacks. Studying the vulnerability of face recognition models under adversarial attack helps us better understand adversarial samples and build more robust models. In this thesis, we study adversarial attacks on face recognition in the black-box scenario.

To address the low success rate of black-box attacks, the Dropout Momentum Diverse Inputs Iterative Fast Gradient Sign Method (DO-M-DII-FGSM) is proposed. The algorithm introduces a dropout operation: in each iteration, the perturbation of each pixel is zeroed out with a certain probability, which improves the transferability of adversarial samples and raises the success rate of black-box attacks. Extensive experiments on the ImageNet dataset show that the black-box attack performance of DO-M-DII-FGSM is superior to that of the Momentum Diverse Inputs Iterative Fast Gradient Sign Method (M-DII-FGSM), increasing the black-box attack success rate by 4.88% on average.

To balance the adversarial strength and the smoothness of adversarial samples, an objective function mediating between embedding-vector distance and pixel smoothness is designed. When generating the adversarial perturbation, both the distance between the face embedding vectors and the smoothness of adjacent pixels are taken into account. Extensive experiments are carried out on the LFW dataset, with ArcFace and SphereFace used as surrogate models to generate adversarial samples that attack FaceNet. The results show attack success rates above 62.60%.

To address the problem that adversarial samples are easily disturbed by the background of the face image, a face core-region mechanism is proposed. It restricts the adversarial perturbation to the core region of the face, a sub-area of the face image computed from facial landmark points that excludes background interference. Extensive experiments on the LFW dataset show that adversarial samples generated under the core-region restriction achieve an average attack success rate 2.88% higher than those generated without the restriction.
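The dropout-on-perturbation idea described above can be illustrated with a short sketch. The following PyTorch code is a minimal reading of DO-M-DII-FGSM, assuming the standard MI-FGSM momentum update and the diverse-inputs resize-and-pad transform; the function names and hyperparameters (drop_p, div_p, eps) are illustrative assumptions, not the thesis's exact implementation.

import torch
import torch.nn.functional as F

def diverse_input(x, p=0.5):
    """Diverse-inputs transform: with probability p, randomly resize the
    batch and zero-pad it back to the original spatial size."""
    if torch.rand(1).item() >= p:
        return x
    h = x.shape[-1]
    new = torch.randint(int(0.9 * h), h, (1,)).item()
    xr = F.interpolate(x, size=(new, new), mode='nearest')
    pad = h - new
    top = torch.randint(0, pad + 1, (1,)).item()
    left = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(xr, (left, pad - left, top, pad - top))

def do_m_dii_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0,
                  drop_p=0.1, div_p=0.5):
    alpha = eps / steps                  # per-step budget
    g = torch.zeros_like(x)              # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(diverse_input(x_adv, div_p)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # MI-FGSM: normalize the gradient and accumulate momentum
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        step = alpha * g.sign()
        # dropout on the perturbation: each pixel's update is zeroed
        # with probability drop_p (the key modification in the thesis)
        keep = (torch.rand_like(step) > drop_p).float()
        x_adv = x_adv.detach() + step * keep
        # project back into the eps-ball and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv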
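The combined objective can likewise be sketched as a weighted sum of an embedding-distance term and a smoothness term. The code below assumes a cosine distance between face embeddings and a total-variation penalty on the perturbation; emb_model, target_emb, and the weight lam_tv are illustrative names, and the exact distance metric used in the thesis is not specified by the abstract.

import torch
import torch.nn.functional as F

def combined_loss(emb_model, x_adv, x_src, target_emb, lam_tv=0.01):
    # vector-distance term: push the adversarial embedding toward the
    # target identity (cosine distance here is an assumption)
    emb = emb_model(x_adv)
    d_vec = 1.0 - F.cosine_similarity(emb, target_emb, dim=-1).mean()
    # pixel-smoothness term: total variation of the perturbation,
    # penalizing large differences between adjacent pixels
    delta = x_adv - x_src
    tv = (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean() \
       + (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean()
    return d_vec + lam_tv * tv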
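Finally, the core-region restriction amounts to masking the perturbation with a region derived from facial landmarks. A minimal sketch using OpenCV's convex hull over landmark points follows; the landmark detector (e.g. dlib or MTCNN) and the exact region definition are assumptions, since the abstract only states that the region is computed from facial key points.

import numpy as np
import cv2

def core_region_mask(landmarks, hw):
    """Binary mask covering the convex hull of the facial landmarks
    (eyes, brows, nose, mouth), excluding the image background."""
    mask = np.zeros(hw, dtype=np.uint8)
    hull = cv2.convexHull(np.asarray(landmarks, dtype=np.int32))
    cv2.fillConvexPoly(mask, hull, 1)
    return mask

# usage: multiply the perturbation by the mask in every attack iteration,
# so the update never touches the background, e.g.
#   x_adv = x + delta * torch.from_numpy(mask)[None, None].float()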
Keywords/Search Tags:Deep learning, Face recognition, Adversarial examples, Adversarial attack