Person re-identification (ReID) is an image retrieval task that aims to match images of the same person across different surveillance cameras. However, deep neural network based ReID systems have been shown to be vulnerable to adversarial attacks. To improve the security of ReID systems, it is essential to investigate adversarial defense techniques for ReID models. Existing defense methods remain deficient; to improve the defense performance of ReID models, this dissertation carries out the following two works:

(1) Among previous defense methods, those based on image pre-processing tend to reduce recognition accuracy on clean images, while those based on adversarial example detection only block suspected adversarial examples from entering the system and do not truly improve the robustness of the system. Therefore, this dissertation proposes a robust ReID model that not only defends against adversarial attacks but also maintains recognition accuracy. The model combines the advantages of adversarial example detection and adversarial training. On the one hand, this dissertation proposes a novel adversarial example detection method that is based on perturbation information, achieves high detection accuracy, and can purify adversarial examples through a simple perturbation removal operation. On the other hand, this dissertation also proposes an adversarial attack method for ReID and uses the adversarial examples generated by this method to train a perturbation extractor. Experimental results show that the proposed method significantly improves recognition accuracy from 0.50% to 73.97% against Deep Mis-Ranking, the strongest ReID attack method; meanwhile, the proposed adversarial example detection method achieves a classification accuracy of 96.29%.

(2) In addition, this dissertation proposes a hybrid defense method that aims to improve the robustness of the ReID model against local adversarial attacks. Specifically, this dissertation first improves the local adversarial attack method and uses it for adversarial training. Second, the robustness of the adversarially trained model is further improved by introducing data augmentation methods and a consistency regularization loss. Finally, a local gradients smoothing method is introduced to strengthen the defense from outside the system. Experimental results show that the hybrid defense method improves the recognition accuracy of the attacked model from 32.33% to 88.02%.
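The local gradients smoothing idea mentioned in (2) can be illustrated with a minimal sketch: local adversarial patches concentrate high-frequency energy, so suppressing pixels in high-gradient regions before inference weakens the patch while leaving smooth image regions intact. This is a simplified whole-image version under assumed parameters (published block-wise variants and the dissertation's exact formulation may differ); the function name, `threshold`, and `strength` values are illustrative.

```python
import numpy as np

def local_gradients_smoothing(img, threshold=0.1, strength=2.3):
    """Suppress high-gradient pixels of a grayscale image in [0, 1].

    Illustrative sketch only: computes first-order gradients, builds a
    suppression mask from the normalized gradient magnitude, and scales
    the image down where the magnitude exceeds `threshold`.
    """
    # First-order forward differences (append keeps the output shape).
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])

    # Normalized gradient magnitude in [0, 1].
    mag = np.sqrt(gx ** 2 + gy ** 2)
    mag = mag / (mag.max() + 1e-8)

    # Mask is nonzero only where the gradient is strong; clip keeps the
    # scaling factor within [0, 1].
    mask = np.clip(strength * mag, 0.0, 1.0) * (mag > threshold)
    return img * (1.0 - mask)
```

A sharp, isolated spike (a crude stand-in for a patch boundary) is attenuated, while flat regions pass through unchanged; in a full pipeline this transform would be applied to the input image before it reaches the ReID model.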