| With the rapid development of deep learning, artificial intelligence has become increasingly intertwined with everyday life. However, the research and application of artificial intelligence also raise serious security concerns: deep learning models are vulnerable to adversarial attacks that cause incorrect classification or prediction results. Most existing defense algorithms suffer from low defense accuracy, poor generalization, and insufficient transferability. This paper proposes two effective defense algorithms against adversarial attacks. The main contributions are summarized as follows: (1) A new adversarial defense model, CFNet, is proposed, which combines contrastive learning with frequency-domain features to effectively remove adversarial perturbations from adversarial examples. CFNet separates the feature maps produced by a multi-layer convolutional neural network and computes the similarity between each feature map and the high- and low-frequency feature maps obtained by applying a Gaussian low-pass filter to clean examples. By adjusting the emphasis placed on the high-frequency feature maps, adversarial perturbations can be effectively removed and high-quality reconstructed examples obtained. Finally, a contrastive regularization (CR) combined with mutual information is introduced to further improve the robustness and classification accuracy of CFNet. (2) A multi-level adaptive knowledge distillation defense algorithm, DDNet, is proposed. First, an adaptive weight calculation module is used to enhance the richness and importance of network features. Second, DDNet is designed and implemented to remove adversarial perturbations from adversarial examples through knowledge distillation. In addition, a defense contrastive regularization (DCR) based on contrastive learning is designed to strengthen the denoising and reconstruction ability of the student model, producing target examples that better match the data distribution of clean examples and improving classification accuracy. Experimental results show that defense accuracy improves by 3.43% against pixel-based attacks, by 6.52% against spatial-domain attacks, and by 11% against previously unseen attack algorithms. |
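
The frequency-domain step of CFNet is only described at a high level above. As a rough illustration of the general idea, not of the paper's exact implementation, the sketch below splits a batch of feature maps into low- and high-frequency components with a depthwise Gaussian low-pass filter and scores each component's cosine similarity against references derived from clean examples. The function names, kernel size, and sigma are placeholder assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Build a normalized 2-D Gaussian kernel of shape (size, size)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()

def split_frequencies(x: torch.Tensor, size: int = 5, sigma: float = 1.0):
    """Split feature maps (B, C, H, W) into low- and high-frequency parts.

    The low-frequency part is a depthwise Gaussian blur; the high-frequency
    part is the residual x - low, where fine detail (and much of a typical
    adversarial perturbation) tends to concentrate.
    """
    c = x.size(1)
    weight = gaussian_kernel(size, sigma).to(x.device, x.dtype)
    weight = weight.view(1, 1, size, size).repeat(c, 1, 1, 1)  # one filter per channel
    low = F.conv2d(x, weight, padding=size // 2, groups=c)
    high = x - low
    return low, high

def frequency_similarity(feat: torch.Tensor, ref_low: torch.Tensor, ref_high: torch.Tensor):
    """Per-sample cosine similarity between a feature map and the low/high-
    frequency references obtained from clean examples."""
    flat = lambda t: t.flatten(start_dim=1)
    sim_low = F.cosine_similarity(flat(feat), flat(ref_low), dim=1)
    sim_high = F.cosine_similarity(flat(feat), flat(ref_high), dim=1)
    return sim_low, sim_high
```

Re-weighting the high-frequency component before recombining it with the low-frequency part (e.g., attenuating `high` in `low + high`) is one plausible way to realize the "adjusting the emphasis on the high-frequency feature maps" described above.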
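
Similarly, DDNet's combination of knowledge distillation and defense contrastive regularization (DCR) is summarized but not specified here. The following minimal sketch shows one common way such a training objective can be assembled: a pixel-level reconstruction term, a feature-distillation term against a frozen teacher trained on clean examples, and an InfoNCE-style contrastive term in which the clean-example teacher feature serves as the positive. The loss weights, temperature, and tensor shapes are assumptions made for the sketch and do not describe the paper's actual DCR.

```python
import torch
import torch.nn.functional as F

def distillation_defense_loss(student_feat, teacher_feat, recon, clean,
                              negatives, temperature: float = 0.1,
                              alpha: float = 1.0, beta: float = 0.5):
    """Toy training objective for a distillation-based denoiser (all shapes assumed).

    student_feat : student features of the reconstructed example, shape (B, D)
    teacher_feat : frozen-teacher features of the clean example, shape (B, D)
    recon, clean : reconstructed and clean images, shape (B, C, H, W)
    negatives    : features of mismatched examples, shape (B, K, D)
    """
    # Pixel-level reconstruction: pull denoised outputs toward clean examples.
    recon_loss = F.mse_loss(recon, clean)

    # Feature-level distillation: match the frozen teacher's representation.
    distill_loss = F.mse_loss(student_feat, teacher_feat.detach())

    # InfoNCE-style contrastive term: the clean-teacher feature is the positive,
    # mismatched features are the negatives.
    q = F.normalize(student_feat, dim=1)                 # (B, D)
    pos = F.normalize(teacher_feat.detach(), dim=1)      # (B, D)
    neg = F.normalize(negatives.detach(), dim=2)         # (B, K, D)
    l_pos = (q * pos).sum(dim=1, keepdim=True)           # (B, 1)
    l_neg = torch.bmm(neg, q.unsqueeze(2)).squeeze(2)    # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    contrast_loss = F.cross_entropy(logits, labels)      # positive sits at index 0

    return recon_loss + alpha * distill_loss + beta * contrast_loss
```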