Face anti-spoofing detection is now widely used for security verification and to protect face recognition systems from presentation attacks. However, existing studies have focused mainly on detection accuracy and model generalization while ignoring the security of the models themselves. Meanwhile, since researchers have discovered the threat of adversarial attacks on DNNs, it is worth investigating whether similar security concerns exist in face anti-spoofing detection models. Therefore, in this paper, we conduct an in-depth study of model security under adversarial attacks, mainly from the following two aspects.

1. We explore the ability of models to resist white-box and black-box attacks by combining block attacks, single-stream attacks within a multi-stream model, and multi-stream attacks. To address the vulnerability of face anti-spoofing detection models to adversarial examples, we first propose a power spectrum-based adversarial example detector that works from the perspective of frequency analysis and effectively detects adversarial perturbations in RGB images. The aim is to reject such images by analyzing the changes in spectral energy caused by the accumulation of perturbations. Second, to improve the security of multimodal models, we propose a defense method that prompts the model to learn perturbation features while suppressing high-frequency adversarial information. Experiments show that the proposed defense better balances detection accuracy and robustness when the model encounters adversarial examples.

2. By implementing two types of backdoor attacks on face anti-spoofing detection models and deepfake detection models, we demonstrate the backdoor threat to face authenticity tasks. To address this threat, and the difficulty existing backdoor defenses have in maintaining good results across both visible and invisible triggers, we propose a continual learning-based residual attention network that filters poisoned samples. The proposed defense approaches trigger identification from the perspective of the attack process: the very success of a backdoor attack shows that the DNN has learned to match the trigger. To maintain good recognition across multiple triggers, we incorporate a continual learning strategy, forming a complete network for poisoned-sample recognition. Experiments show that the proposed defense filters poisoned samples more effectively, and a network trained on the filtered dataset renders the backdoor ineffective.
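The abstract does not specify the detector's implementation, but the underlying idea of the power spectrum-based detection can be illustrated concretely: adversarial perturbations tend to concentrate energy in high spatial frequencies, so an image whose high-frequency band deviates from a clean reference profile can be flagged. The following is a minimal sketch of that general idea, not the paper's actual method; `radial_power_spectrum`, `is_adversarial`, and the parameters `hf_start` and `threshold` are hypothetical names and values chosen for illustration, and a grayscale input is assumed for simplicity.

```python
import numpy as np

def radial_power_spectrum(image, num_bins=64):
    """Azimuthally averaged power spectrum of a grayscale image."""
    # 2D FFT, shifted so the zero frequency sits at the image center
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2

    h, w = image.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)

    # Average power within concentric frequency rings (low -> high frequency)
    bins = np.linspace(0, r.max(), num_bins + 1)
    spectrum = np.empty(num_bins)
    for i in range(num_bins):
        ring = (r >= bins[i]) & (r < bins[i + 1])
        spectrum[i] = power[ring].mean() if ring.any() else 0.0
    return spectrum

def is_adversarial(image, clean_profile, hf_start=48, threshold=2.0):
    """Flag an image whose high-frequency energy exceeds a clean reference.

    clean_profile: radial spectrum averaged over known-clean images
    (an assumption of this sketch, estimated offline).
    """
    spec = radial_power_spectrum(image)
    # Perturbation accumulation shifts energy into the high-frequency rings
    ratio = spec[hf_start:].sum() / (clean_profile[hf_start:].sum() + 1e-12)
    return ratio > threshold
```

In this sketch, the reference profile would be estimated once over a held-out set of clean face images, after which detection reduces to a single FFT and a ratio test per input; the actual detector in the paper may use a learned decision rule rather than a fixed threshold.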