In recent years, with the rapid development of artificial intelligence, face recognition technology has been widely deployed across many fields. In China, its market share continues to grow and its application scenarios keep expanding, covering transportation, finance, security, Internet services, intelligent parks, and many other areas. With this wide deployment, however, security has become a major concern, especially in scenarios with high security requirements such as face payment and face authentication. As a result, attack methods targeting face recognition have emerged. These attacks can cause face recognition systems to fail or be exploited by malicious users for illegal purposes, causing harm to society. To better safeguard face recognition technology, it is important to study attack methods against it and thereby improve its security and robustness.

Because neural networks are vulnerable, adversarial attacks threaten the security of face recognition systems, and researchers have proposed a variety of adversarial attack algorithms against them. The main difficulties in attacking face recognition systems are as follows: 1) When the entire face image is attacked, the adversarial perturbation covers too large an area and the attack loses its stealthiness. 2) It is difficult to mount targeted attacks against face recognition. 3) Black-box attacks are closest to real application environments and best match practical attack requirements, but the transferability of adversarial examples against face recognition is weak, and the attack success rate against black-box models is low. This paper therefore studies the aggressiveness of adversarial examples through the attack process and proposes corresponding solutions to the three difficulties above:

1) We propose attacking only the key regions of the face image while ignoring unimportant regions, which generates high-quality adversarial examples against face recognition models. The area above the eyes and below the eyebrows is extracted as the key region, and three gradient-based attack algorithms (FGSM, I-FGSM, MI-FGSM) are used to generate the adversarial perturbation within this mask area. The resulting perturbation has a small attack range, solving the problem of an overly large perturbation region. 2) To make adversarial examples against face recognition more threatening, we also propose a mask-based universal perturbation attack, which can cause multiple users to be identified as the target user. 3) To improve both the transferability of adversarial face examples and the quality of the generated examples, we adopt ensemble attacks and propose a mask-based ensemble attack algorithm that strengthens the aggressiveness and transferability of the adversarial mask against black-box models.

We verify the effectiveness of the proposed methods through extensive experiments, preparing face data and multiple face recognition models to be ensembled in order to evaluate mask-based adversarial examples. The experiments show that the proposed attack achieves a high success rate and strong transferability in black-box settings. On this basis, both the mask attack and the ensemble attack against face recognition systems are completed.
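The masked gradient attack described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the image, gradient, and mask below are toy stand-ins (in practice the gradient would come from backpropagation through a face recognition model, and the mask would cover the extracted eye/eyebrow region), but the core FGSM step restricted to a mask is the same.

```python
import numpy as np

def masked_fgsm(x, grad, mask, eps):
    """One FGSM step restricted to a binary mask region.

    x    : input image with pixel values in [0, 1]
    grad : gradient of the attack loss w.r.t. x (from the face model)
    mask : binary array, 1 inside the key region, 0 elsewhere
    eps  : L-infinity perturbation budget
    """
    # Perturb only where mask == 1; all other pixels stay untouched,
    # which keeps the attack range small and stealthy.
    x_adv = x + eps * mask * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy example: a 4x4 "image" with a 2x2 masked key region.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(4, 4))
grad = rng.standard_normal((4, 4))   # stand-in for a backprop gradient
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0

x_adv = masked_fgsm(x, grad, mask, eps=0.05)
```

The iterative variants (I-FGSM, MI-FGSM) repeat this step with a smaller step size, MI-FGSM additionally accumulating a momentum term over the gradients; the mask restriction applies identically at every step.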
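The mask-based universal perturbation idea, where one perturbation causes multiple users to be identified as the target user, can be sketched as follows. This is a simplified illustration under assumed interfaces: `target_grad_fn` stands in for the gradient of a targeted loss (e.g. similarity to the target user's embedding), and here it is a constant toy gradient rather than a real face model.

```python
import numpy as np

def universal_masked_perturbation(images, target_grad_fn, mask, eps, steps=20):
    """Learn one masked perturbation shared across many face images.

    images         : list of face images in [0, 1]
    target_grad_fn : returns the gradient of a targeted loss w.r.t. its input
    mask           : binary array marking the key region
    eps            : L-infinity perturbation budget
    """
    delta = np.zeros_like(images[0])
    alpha = eps / steps
    for _ in range(steps):
        # Average the targeted gradient over the whole batch so the SAME
        # perturbation pushes every user toward the target identity.
        grad = np.mean([target_grad_fn(x + delta) for x in images], axis=0)
        delta = np.clip(delta + alpha * mask * np.sign(grad), -eps, eps)
        delta *= mask  # keep the perturbation strictly inside the mask
    return delta

# Toy example: 5 "face images" and a constant illustrative gradient.
rng = np.random.default_rng(1)
images = [rng.uniform(0, 1, size=(4, 4)) for _ in range(5)]
w = rng.standard_normal((4, 4))
mask = np.zeros((4, 4))
mask[0:2, :] = 1.0
delta = universal_masked_perturbation(images, lambda x: w, mask, eps=0.08)
```

The key design choice is averaging the gradient over all users before each update, which trades per-image optimality for a perturbation that generalizes across the batch.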
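The mask-based ensemble attack can likewise be sketched in a few lines. This is a schematic NumPy version under assumed interfaces, not the paper's algorithm: each surrogate model is represented only by a gradient function (here a constant toy gradient), whereas in practice the gradients would come from several distinct face recognition networks.

```python
import numpy as np

def ensemble_masked_attack(x, surrogate_grads, mask, eps, steps=5):
    """Iterative masked attack averaging gradients over surrogate models.

    surrogate_grads : list of functions, each returning the gradient of one
                      surrogate face model's loss w.r.t. the input image.
    Averaging over several models keeps the perturbation from overfitting
    any single network, which is what improves black-box transferability.
    """
    alpha = eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        grad = np.mean([g(x_adv) for g in surrogate_grads], axis=0)
        x_adv = x_adv + alpha * mask * np.sign(grad)
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# Toy example: three surrogates with constant illustrative gradients.
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, size=(4, 4))
mask = np.zeros((4, 4))
mask[1:3, :] = 1.0
grads = [rng.standard_normal((4, 4)) for _ in range(3)]
surrogate_grads = [(lambda x_, w=w: w) for w in grads]
x_adv = ensemble_masked_attack(x, surrogate_grads, mask, eps=0.05)
```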