
A Generic Imitation Attack Against Face Recognition Systems

Posted on: 2022-10-26
Degree: Master
Type: Thesis
Country: China
Candidate: C J Zhang
Full Text: PDF
GTID: 2518306491466454
Subject: Computer technology
Abstract/Summary:
With the development of deep learning, the accuracy of current face recognition methods has surpassed human-level performance. Domestic face recognition applications have developed rapidly, their market share is growing, and their application scenarios now cover security, finance, smart parks, transportation, Internet services, and other fields. As face recognition is deployed in more and more areas with high security requirements, such as face payment and face authentication, its security has attracted growing attention. Attack methods against face recognition systems evolve alongside face recognition technology itself, and research on these attacks contributes to the more comprehensive and healthy development of the field: by addressing the various attacks on face recognition, more secure systems can be built, which are better accepted by the market and develop rapidly.

Adversarial attack methods can be divided into targeted and non-targeted attacks. A targeted attack misleads the neural network into recognizing the adversarial example as a fixed target, while a non-targeted attack only requires the network to recognize it incorrectly. In attacks on face recognition, targeted attacks are more difficult than non-targeted ones, but their potential harm is also greater. To further explore the security of face recognition systems, this thesis designs and implements a targeted attack against face recognition systems, namely an imitation attack, which makes the adversarial example imitate a fixed target. By generating a pair of adversarial glasses, a user will be recognized as a specific attack target after putting them on. On this basis, the thesis further studies how to enlarge the attack range of the adversarial glasses and realizes a universal imitation attack against face recognition systems; that is, any user wearing the adversarial glasses will be recognized as the specific target. The universal imitation attack can attack all face images with a single pair of adversarial glasses: the generated glasses depend only on the model and are no longer limited by the input image, which makes them even more harmful to face recognition systems.

Extensive experiments are carried out to verify the effect of the proposed method. With prepared face data and the corresponding face recognition models, the attack effect of the proposed adversarial glasses is tested. The experiments show that the proposed method achieves a high attack success rate and, even on unknown images, maintains a certain attack effect; that is, it can still complete the imitation attack on images not seen during optimization.

Studying face recognition from the attacker's perspective aims to improve face recognition technology in an all-round way, especially its anti-interference and anti-attack capability. Through the proposed universal imitation attack method, we hope that more researchers will pay attention to the security of face recognition technology, and that academia and industry will jointly address the security risks in face recognition applications, so as to push face recognition technology in a more secure and comprehensive direction.
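The abstract does not give implementation details, but the universal imitation attack it describes, where one glasses-shaped perturbation is optimized over many faces so that every wearer is matched to a fixed target identity, can be sketched in PyTorch as follows. This is a minimal illustration under assumed components: `model`, `glasses_mask`, `target_embedding`, and all hyperparameters are hypothetical placeholders, not the thesis's actual configuration.

```python
# Hypothetical sketch of a universal imitation attack with adversarial
# glasses. Assumes a PyTorch face-recognition model mapping aligned face
# crops (B, 3, H, W) to embeddings, and a binary glasses-region mask.
import torch
import torch.nn.functional as F

def train_universal_glasses(model, face_batches, target_embedding,
                            glasses_mask, steps=1000, lr=0.01):
    """Optimize ONE glasses-shaped patch so that any face wearing it
    is embedded close to a fixed target identity."""
    # glasses_mask: (1, 1, H, W), 1 inside the eyeglass region, 0 elsewhere.
    h, w = glasses_mask.shape[-2:]
    patch = torch.rand(1, 3, h, w, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for faces in face_batches:  # batches drawn from many identities
            # Paste the patch onto every face in the glasses region.
            worn = faces * (1 - glasses_mask) + patch.clamp(0, 1) * glasses_mask
            emb = F.normalize(model(worn), dim=1)
            # Pull every wearer's embedding toward the fixed target
            # by minimizing (1 - cosine similarity).
            loss = (1 - (emb * target_embedding).sum(dim=1)).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return patch.detach().clamp(0, 1)
```

Averaging the loss over batches drawn from many different identities is what would make such a patch universal: it cannot overfit to any single input face, so the resulting glasses depend only on the model, consistent with the property the abstract claims.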
Keywords/Search Tags: Adversarial examples, Face recognition, Adversarial attack, Adversarial glasses