With the rapid development of deep learning research, artificial intelligence (AI) security has gradually become a hot topic in the AI field, with numerous applications across security domains. However, many studies have demonstrated that deep learning-based AI models are susceptible to adversarial attacks, in which carefully crafted perturbations are added to data samples to deceive AI models and cause serious security issues. Adversarial attacks affect not only traditional AI models but also novel paradigms such as federated learning. In response to the current state of AI security, this thesis proposes adversarial attack schemes against object detectors in the real world and against federated learning.

(1) This thesis investigates adversarial attacks on object detectors in the real world and proposes a novel adversarial attack algorithm called Misleading Attention and Classifier Attack (MACA). Specifically, it presents a scheme for generating adversarial patches that deceive object detectors, and constrains the noise in the patches so that they remain visually similar to natural images, thereby improving their visual aesthetics. The scheme also simulates complex external physical environments and the 3D distortions of flexible objects to increase the robustness of the adversarial patches. Through 2D image, 3D model simulation, and real-world experiments, the scheme successfully attacks recent object detectors (such as YOLOv5), demonstrating strong universality across different detectors. Extensive experiments show that transferring digital adversarial patches to the real world is feasible and that the patches transfer between different models.

(2) This thesis then explores adversarial poisoning attacks on federated learning, in which a small number of malicious participants insert adversarial samples into their training data to degrade the global model's accuracy and thereby achieve data poisoning. Specifically, the Universal Adversarial Perturbation (UAP) algorithm is used to launch an adversarial poisoning attack on the federated learning model; the generated adversarial samples add only a small amount of pixel-level noise. With just a few malicious participants, the federated learning model can be significantly misled. This thesis also proposes a malicious-parameter detection algorithm for federated learning that defends against label-flipping attacks. Experiments compare the proposed approach with existing poisoning attack methods such as the Gaussian noise attack, random label-flipping attack, and targeted label-flipping attack. The results show that adversarial attacks mounted by a small number of malicious participants can significantly reduce the performance of the federated aggregation model, and that the proposed malicious-parameter detection algorithm effectively mitigates the impact of label-flipping attacks on it.
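As an illustration of the patch-generation idea summarized in (1), the following is a minimal, generic PyTorch sketch: a patch is optimized to suppress a detector's confidence, a total-variation term keeps it smooth so it stays visually close to a natural image, and random photometric and affine transforms crudely approximate physical conditions and 3D distortion. The names `detector` and `paste_patch` are hypothetical placeholders, and MACA's actual attention-misleading and classifier losses are not reproduced here.

```python
# Minimal sketch of physically-robust adversarial patch optimization.
# `detector` and `paste_patch` are hypothetical stand-ins: any differentiable
# detector reduced to a confidence score, and any routine that pastes the
# patch onto an image. This is an illustration, not the MACA implementation.
import torch
import torch.nn.functional as F

def total_variation(patch):
    # Smoothness prior that keeps the patch visually close to a natural image.
    dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean()
    dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
    return dh + dw

def random_physical_transform(patch):
    # Crude Expectation-over-Transformation step: random brightness/contrast
    # plus a small affine warp stand in for lighting changes and 3D distortion.
    brightness = torch.empty(1).uniform_(-0.1, 0.1)
    contrast = torch.empty(1).uniform_(0.8, 1.2)
    patch = (patch * contrast + brightness).clamp(0, 1)
    theta = torch.tensor([[[1.0, 0.05 * torch.randn(1).item(), 0.0],
                           [0.05 * torch.randn(1).item(), 1.0, 0.0]]])
    grid = F.affine_grid(theta, patch.unsqueeze(0).shape, align_corners=False)
    return F.grid_sample(patch.unsqueeze(0), grid, align_corners=False).squeeze(0)

def optimize_patch(detector, paste_patch, images, steps=200, lr=0.01, tv_weight=2.5):
    patch = torch.rand(3, 64, 64, requires_grad=True)   # patch pixels in [0, 1]
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for img in images:
            warped = random_physical_transform(patch.clamp(0, 1))
            adv_img = paste_patch(img, warped)           # hypothetical helper
            score = detector(adv_img.unsqueeze(0))       # confidence to suppress
            loss = loss + score.mean()
        loss = loss / len(images) + tv_weight * total_variation(patch)
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)
    return patch.detach()
```

In this sketch, any differentiable detector whose output can be reduced to a scalar confidence could be plugged in as `detector`; minimizing that confidence plus the total-variation penalty yields a smooth patch that remains effective under the sampled transformations.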
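Likewise, the following sketch illustrates the poisoning setting summarized in (2): in a generic FedAvg loop, a few malicious clients add a small, precomputed universal perturbation to their training inputs before computing local updates. The UAP construction itself and the thesis's malicious-parameter detection defense are not shown; `uap`, the averaging loop, and all other identifiers below are illustrative assumptions rather than the thesis's implementation.

```python
# Minimal sketch of adversarial poisoning in federated averaging: malicious
# clients perturb their local training inputs with a precomputed universal
# perturbation `uap` (construction omitted) before returning their update.
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, uap=None, epochs=1, lr=0.01, eps=8 / 255):
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            if uap is not None:                          # malicious client
                x = (x + eps * uap.sign()).clamp(0, 1)   # small pixel-level noise
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg(global_model, client_loaders, malicious_ids, uap, rounds=10):
    for _ in range(rounds):
        states = [local_update(global_model, loader,
                               uap if i in malicious_ids else None)
                  for i, loader in enumerate(client_loaders)]
        # Plain parameter averaging over all client updates.
        avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
               for k in states[0]}
        global_model.load_state_dict(avg)
    return global_model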