In recent years, with the rapid development of neural networks in artificial intelligence, deep neural networks have been widely applied to image recognition, autonomous driving, and other fields. However, studies have found that deep neural networks are vulnerable to small perturbations imperceptible to the naked eye, which cause erroneous outputs. The existence of adversarial attacks seriously limits the applicability of deep neural networks. To ensure the security of deep neural network application scenarios, it is particularly important to study adversarial attacks and the generation of adversarial examples. Addressing the problems of long adversarial example generation time, conspicuous perturbations, and low attack success rates, this thesis studies the generation of malicious input perturbations for neural networks. The main contributions are as follows.

First, an adversarial example generation algorithm based on the decision boundary of the neural network is proposed. It combines generative adversarial network (GAN) techniques with adversarial attacks: the target model penalizes the images produced by the generator, and the generator is continuously optimized so that its outputs approach the decision boundary of the target network. An adversarial training experiment is also designed. After adversarial training with the examples generated by this method on the CIFAR-10 dataset, the robustness of the neural network model reaches 81.31%, which is 10% to 20% higher than common adversarial training methods such as TRADES and CAT.

Second, a method of generating adversarial examples by attacking the hidden layers is proposed, combining adversarial attacks with an attention mechanism. The attention mechanism extracts the features of different channels in the image, which are then fed into the neural network model as prior knowledge for the hidden-layer attack, thereby improving the strength of
the generated perturbations and the transferability of the adversarial examples. The attack success rate of this method against the robust model TRADES on the CIFAR-10 dataset reaches 47.2%, and it also shows strong transferability against other black-box models.

Third, a large-scale adversarial example simulation and verification platform is designed and implemented. The platform can simulate and verify the generation and defense processes of neural network Trojans, and provides a visual operation interface, solving problems such as the inconvenient management of adversarial examples and the poor interactivity with users during generation and defense. Functional and performance tests are carried out on the platform, covering six categories of test items, including user login and registration, adversarial example attack and defense simulation, batch sample testing, and request latency testing. The test results verify the usability, security, and reliability of the system and confirm that it meets the application scenarios required by users.
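The boundary-driven generation idea can be sketched numerically. The sketch below is a minimal illustration under stated assumptions, not the thesis's implementation: it omits the GAN discriminator term and keeps only the target-model penalty, using a linear "generator" and a frozen linear "victim" classifier; all shapes, learning rates, and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "victim" linear classifier: logits = x @ Wt (2 classes, illustrative).
Wt = rng.normal(size=(4, 2))

# Linear "generator": x = z @ Wg; only Wg is trained.
Wg = rng.normal(size=(8, 4)) * 0.1
z = rng.normal(size=(32, 8))                 # latent noise batch
y = np.zeros(32, dtype=int)                  # label the samples are meant to resemble

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for _ in range(200):
    x = z @ Wg                               # generated samples
    p = softmax(x @ Wt)                      # victim's predictions
    # Gradient of cross-entropy w.r.t. logits is (p - onehot). Ascending this
    # loss penalizes confident classification, pushing generated samples
    # toward (and across) the victim's decision boundary.
    g_logits = p.copy()
    g_logits[np.arange(32), y] -= 1.0
    g_x = g_logits @ Wt.T                    # backpropagate through the victim
    g_Wg = z.T @ g_x / 32
    Wg += lr * g_Wg                          # gradient *ascent* on the CE loss

fooled = ((z @ Wg @ Wt).argmax(axis=1) != y).mean()
print(f"fraction misclassified: {fooled:.2f}")
```

In the full method a discriminator loss would be added to this penalty so the generated images also remain realistic; here only the boundary term is shown.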
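The attention prior used in the hidden-layer attack can likewise be illustrated with a toy sketch. Everything below is a hedged assumption: a squeeze-and-excitation-style channel attention reweights a sign-gradient perturbation so the budget concentrates on the channels the attention deems most informative, and the real hidden-layer gradient is replaced by random noise to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feature map: 3 channels, 4x4 spatial resolution (illustrative shapes).
feat = rng.normal(size=(3, 4, 4))

# Squeeze-and-excitation-style channel attention: global-average-pool each
# channel, then softmax to obtain per-channel importance weights.
pooled = feat.mean(axis=(1, 2))
w = np.exp(pooled) / np.exp(pooled).sum()

# Use the attention weights as a prior: the perturbation budget eps is
# concentrated on the channels with the highest attention weight.
grad = rng.normal(size=feat.shape)   # stand-in for a hidden-layer gradient
eps = 0.1
delta = eps * np.sign(grad) * w[:, None, None]

perturbed = feat + delta
print(perturbed.shape)
```

Because the weights sum to one, each channel's perturbation stays within the overall budget while the most attended channels receive the largest share.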