In recent years, with the rapid iteration of computer hardware and the emergence of various open-source deep learning frameworks, artificial intelligence technologies based on convolutional neural network models have made significant breakthroughs in many fields, triggering a new industrial revolution. However, these breakthroughs in deep neural network technology have also exposed security issues in artificial intelligence. Researchers have found that neural network models are susceptible to adversarial samples: inputs altered by perturbations that are difficult for the human eye to detect, known as adversarial perturbations. The existence of adversarial samples poses an additional danger to artificial intelligence systems in data-security-sensitive industries such as finance, transportation, and healthcare. Against this technical background, this paper conducts an in-depth study of adversarial sample generation algorithms under different scenario conditions. The main research work is as follows:

A color-space-based white-box adversarial sample generation algorithm relying on local perturbations is proposed to address the problem of generating more covert adversarial samples in white-box scenarios. The algorithm builds on the observation that a neural network classifier attends only to certain regions during recognition, effectively assigning different weights to different regions of an image. Accordingly, an image importance evaluation function measures the criticality of each sub-image before a gradient attack is launched on the clean sample. The clean image is then segmented into blocks using a transition-region segmentation scheme adapted to target-background separation, yielding more accurate blocks. By designing a feature-space similarity editing technique and optimizing the color-domain mapping operator, the algorithm also searches for adversarial samples in the color space. Combining these two schemes allows the search to cover a wider adversarial space. Experimental results demonstrate that the method maintains strong attack performance while preserving high image similarity, small spatial distance, and better concealment.

A style-transfer black-box adversarial sample generation algorithm based on a combination direction is proposed to address the problem of generating more transferable adversarial samples in black-box scenarios. Compared with traditional transferable adversarial attacks, this method first introduces a more reasonable way to measure intermediate-layer differences and uses the common gradient of multiple adversarial samples in the adversarial space as the attack direction, alleviating overfitting of the generated adversarial samples to the white-box model. A style-transfer method then fine-tunes the generated adversarial samples in the intermediate feature layers, and the resulting perturbation is converted into a natural pixel perturbation through a decoder. Experimental results demonstrate that the proposed method improves transferability to black-box models more effectively than comparable methods.
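The white-box idea of attacking only the image regions a classifier attends to can be sketched in a minimal form. The following sketch is illustrative, not the thesis's actual algorithm: it stands in a patch-variance score for the importance evaluation function, uses a plain FGSM-style sign step for the gradient attack, and omits the transition-region segmentation and color-space search entirely. All function names and parameters here are hypothetical.

```python
import numpy as np

def importance_mask(image, block=4, keep_ratio=0.5):
    """Score each block x block patch with a simple variance proxy
    (a stand-in for the importance evaluation function) and keep
    only the highest-scoring patches."""
    h, w = image.shape
    scores = {}
    for i in range(0, h, block):
        for j in range(0, w, block):
            scores[(i, j)] = image[i:i+block, j:j+block].var()
    n_keep = max(1, int(len(scores) * keep_ratio))
    keep = sorted(scores, key=scores.get, reverse=True)[:n_keep]
    mask = np.zeros_like(image)
    for (i, j) in keep:
        mask[i:i+block, j:j+block] = 1.0
    return mask

def local_fgsm(image, grad, eps=0.03, block=4):
    """FGSM-style sign step applied only inside the important blocks,
    so the perturbation stays local rather than covering the image."""
    mask = importance_mask(image, block)
    adv = image + eps * np.sign(grad) * mask
    return np.clip(adv, 0.0, 1.0)
```

Restricting the perturbation to a mask is what keeps it covert: pixels outside the selected blocks are left untouched, so the L0 footprint of the attack shrinks while the attacked regions are the ones the classifier actually weights.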
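The black-box "combination direction" idea of averaging gradients over multiple points in the adversarial space, rather than following the gradient at a single point, can likewise be sketched. This is a simplified illustration under stated assumptions: `grad_fn` is a hypothetical gradient oracle for a surrogate white-box model, and the intermediate-layer measurement, style transfer, and decoder from the thesis are not modeled.

```python
import numpy as np

def combined_direction_step(image, grad_fn, eps=0.03,
                            n_samples=5, radius=0.05, seed=0):
    """One attack step along the averaged ("common") gradient of
    several randomly perturbed copies of the input. Averaging over
    a neighborhood reduces overfitting to any single surrogate
    gradient, which is the intuition behind better transferability."""
    rng = np.random.default_rng(seed)
    avg = np.zeros_like(image)
    for _ in range(n_samples):
        neighbor = image + rng.uniform(-radius, radius, image.shape)
        avg += grad_fn(neighbor)
    avg /= n_samples
    adv = image + eps * np.sign(avg)
    return np.clip(adv, 0.0, 1.0)
```

A single-point gradient direction can exploit quirks specific to the white-box surrogate; a direction shared across many nearby adversarial candidates is more likely to transfer to an unseen black-box model.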