In recent years, the rapid development of artificial intelligence (AI) technology and applications, and of deep learning in particular, has empowered a wide range of AI products and greatly facilitated people's daily lives. However, many scenarios that involve users' personal and property safety, such as mobile payment, face recognition, autonomous driving, and intelligent surveillance, have proven difficult to secure, gradually eroding public trust in the security of AI. Indeed, many studies have demonstrated the vulnerability of AI systems: in image classification tasks, adversarial attacks can make a model output the wrong result by adding imperceptible perturbations to the input data. Numerous adversarial defense methods have been proposed to enhance the adversarial robustness of AI systems, but many traditional defenses are considered difficult to apply in real-world scenarios because they lack robustness against unknown attacks. Adversarial training, by contrast, is regarded as a promising defense due to its simplicity and its ability to provide a degree of protection against all adversarial attack methods. Nevertheless, existing adversarial training approaches yield only limited robustness improvements, so further enhancing their robustness remains a fundamental and crucial problem. This paper therefore analyzes this problem and proposes more effective adversarial training solutions.

Adversarial training improves the robustness of a target model by generating adversarial examples and using them to augment the data during neural network training. We argue that the insufficient robustness of existing adversarial training techniques stems from two issues: (1) they ignore the differences in classification difficulty among training samples and apply the same training strategy to all of them, which degrades model training; and (2) they ignore the intrinsic robustness differences among training samples when generating adversarial examples, which yields low-quality adversarial examples.

Accordingly, we first analyze three possible cases for training sample pairs, formulate a corresponding training strategy for each case, and propose the sample-case-aware adversarial training method (SCAT). Second, we show that adaptively adjusting both the total perturbation budget of the adversarial attack and the per-step size of the iterative attack according to each clean sample's intrinsic robustness produces higher-quality adversarial examples for training, and we propose the BA-PGD adversarial attack method. Finally, we combine SCAT and BA-PGD, redefining adversarial training as a new min-max optimization problem that unites the advantages of the two methods and improves adversarial training from both perspectives, and propose sample-case-aware adversarial training with adaptive attack (SCAT-AA). Extensive experiments demonstrate the effectiveness and superiority of the two proposed adversarial training methods and the proposed adversarial attack method.
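For context, the standard formulation of adversarial training that this work builds on can be written as the min-max problem below; this is background, not the paper's new objective, whose exact form is given in the body of the paper:

$$\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \left[ \max_{\|\delta\|_{p} \le \epsilon} \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big) \right]$$

Here $f_{\theta}$ is the model, $\mathcal{L}$ the classification loss, and $\epsilon$ a fixed perturbation budget shared by all samples; the idea behind BA-PGD is to make this budget and the attack step size per-sample rather than global.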
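The following is a minimal PyTorch sketch of that idea, not the paper's actual BA-PGD algorithm: the function name `adaptive_pgd`, the parameters `base_eps` and `steps`, and the confidence-margin rule used to scale each sample's budget are all illustrative placeholders standing in for whatever intrinsic-robustness measure the paper defines.

```python
import torch
import torch.nn.functional as F

def adaptive_pgd(model, x, y, base_eps=8/255, steps=10):
    """Sketch: PGD whose per-sample budget and step size are scaled by a
    proxy for each clean sample's intrinsic robustness. The margin-based
    scaling below is a hypothetical placeholder, not the BA-PGD rule."""
    model.eval()  # freeze BN/dropout statistics while crafting the attack
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
        top2 = probs.topk(2, dim=1).values
        margin = (top2[:, 0] - top2[:, 1]).clamp(0, 1)  # robustness proxy

    # More confidently classified (presumably more robust) samples get a
    # larger budget; step size is tied to each sample's budget.
    eps = (base_eps * (0.5 + margin)).view(-1, 1, 1, 1)
    alpha = 2.5 * eps / steps

    delta = torch.empty_like(x).uniform_(-1, 1) * eps  # random start
    delta = (x + delta).clamp(0, 1) - x
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = delta + alpha * grad.sign()
            delta = delta.clamp(-eps, eps)       # per-sample bounds
            delta = (x + delta).clamp(0, 1) - x  # keep valid pixel range
    return (x + delta).detach()
```

In an adversarial training loop, a call such as `adaptive_pgd(model, x, y)` would replace the clean batch (or augment it, as in the sample-case-aware strategies above) before the usual gradient update on the model.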