With the deepening of research on autonomous driving, the construction of training datasets has come to play a vital role, which makes the model-training stage a direct and effective avenue for attack. During autonomous driving, data poisoning attacks can cause vehicles to violate traffic rules and even cause traffic accidents, so their potential harm is enormous. Data poisoning injects malicious samples, disguised samples, and other toxic samples into the training set to alter the parameters of the model, thereby compromising the model's integrity and availability. This work focuses on common convolutional neural networks, proposes two methods for crafting poisoned samples, and verifies the feasibility of both schemes on three public image datasets. The specific contributions are as follows:

(1) Data augmentation enhances the generalization and robustness of a model and thereby weakens conventional poisoning. To raise the success rate of attacks against target models trained with augmented data, a training strategy is designed that adapts the poisoned samples to data augmentation in advance. Once the poisoned samples have been optimized to this state, data augmentation no longer blunts their effect, and the target model still makes the wrong decision on the target sample (classifying it into the category specified by the attacker). Experimental results show that the attack remains effective even when the trainer preprocesses the training set with data augmentation before training the model.

(2) To address the insufficient concealment of triggers in poisoning-based backdoor attacks, triggers are crafted with a generative adversarial network (GAN). For the attack to be realistic, the poisoned sample must be highly similar to the original sample; the continuous game between the generator and the discriminator therefore gradually improves the similarity between the triggered sample and the original sample, while an L2-norm constraint keeps the trigger small. The trained trigger generator produces triggers invisible to the human eye: feeding Gaussian noise into it yields a targeted trigger. Adding a self-attention mechanism lets the generator focus on the regions that activate the backdoor, so that it quickly captures the feature distribution of the trigger and learns to generate a specific trigger. Poisoning attacks were evaluated on the CTSRB and GTSRB traffic sign datasets and on MNIST. The results show that the GAN-based backdoor poisoning method improves the concealment, success rate, and specificity of the attack.
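The abstract leaves the adaptation strategy of contribution (1) unspecified; a common way to make a poisoned sample survive augmentation is to average the attack loss over random transformations while crafting it. The following is a minimal PyTorch sketch of such an expectation-over-transformations loop, assuming a white-box surrogate model and an illustrative augmentation pipeline (the `augment` pipeline, the `surrogate` interface, and all hyperparameters are assumptions, not the thesis's published method):

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Random augmentations assumed to mirror the trainer's preprocessing
# (the actual pipeline used in the thesis is not specified).
augment = T.Compose([
    T.RandomResizedCrop(32, scale=(0.8, 1.0)),
    T.RandomRotation(15),
    T.ColorJitter(brightness=0.2, contrast=0.2),
])

def craft_augmentation_adapted_poison(surrogate, x, target_label,
                                      steps=200, lr=0.01, eps=8 / 255):
    """Optimise a bounded perturbation on a single image batch x of
    shape (1, C, H, W) so that the poisoned image keeps pushing the
    surrogate model toward the attacker's target label even after
    random data augmentation (hypothetical hyperparameters)."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    y = torch.tensor([target_label])
    for _ in range(steps):
        # Average the attack loss over several random augmentations so
        # the poison survives the trainer's preprocessing.
        loss = sum(F.cross_entropy(surrogate(augment(x + delta)), y)
                   for _ in range(4)) / 4
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep the perturbation small, image valid
            delta.clamp_(-eps, eps)
            delta.copy_((x + delta).clamp(0, 1) - x)
    return (x + delta).detach()
```

Here the perturbation is confined to a small L∞ ball so the poisoned image stays visually close to the original; the constraint actually used in the thesis may differ.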
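Contribution (2) is likewise described only at a high level. The sketch below shows one plausible wiring of a noise-to-trigger generator with a SAGAN-style self-attention block, plus a generator loss combining the adversarial term, the backdoor objective, and the L2 stealth penalty; every layer size, loss weight, and the discriminator/victim-model interfaces are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention block; intended to let the generator
    focus on the regions that activate the backdoor (assumed design)."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)       # (b, hw, c//8)
        k = self.k(x).flatten(2)                       # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)            # (b, hw, hw)
        v = self.v(x).flatten(2)                       # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class TriggerGenerator(nn.Module):
    """Maps Gaussian noise to an image-sized trigger (hypothetical
    architecture; the thesis does not publish its exact layers)."""
    def __init__(self, z_dim=100, ch=64, out_ch=3, size=32):
        super().__init__()
        self.fc = nn.Linear(z_dim, ch * (size // 4) ** 2)
        self.net = nn.Sequential(
            nn.Unflatten(1, (ch, size // 4, size // 4)),
            nn.ConvTranspose2d(ch, ch, 4, 2, 1), nn.ReLU(),
            SelfAttention(ch),
            nn.ConvTranspose2d(ch, out_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(self.fc(z))

def generator_step(G, D, victim, x, z, target_label, lam=0.1):
    """One generator update: fool the discriminator, force the victim
    (or a surrogate of it) to the target class, and keep the trigger
    small via an L2 penalty. Loss weight lam is illustrative."""
    trig = G(z)                                   # trigger from Gaussian noise
    x_poison = (x + trig).clamp(0, 1)
    y = torch.full((x.size(0),), target_label, dtype=torch.long)
    adv = F.binary_cross_entropy_with_logits(
        D(x_poison), torch.ones(x.size(0), 1))    # look like a clean image
    backdoor = F.cross_entropy(victim(x_poison), y)
    stealth = trig.flatten(1).norm(p=2, dim=1).mean()  # L2 trigger constraint
    return adv + backdoor + lam * stealth
```

After training, a targeted trigger would be produced simply by sampling `z = torch.randn(1, 100)` and calling `G(z)`, matching the abstract's claim that only Gaussian noise needs to be fed to the trained generator.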