With the popularity of neural network technology, more and more neural networks are embedded into other systems; for example, driverless vehicles use neural networks to identify traffic signs. At the same time, attacks against neural network models are becoming increasingly frequent. One such attack is the neural network backdoor attack based on data poisoning. Existing backdoor attacks suffer from several problems: trigger generation is complex, the attack success rate is low, the model performance loss is high, the trigger is obvious enough that human eyes can detect the abnormality, and the backdoor model cannot resist defense and detection methods. Given these problems, this thesis studies how to construct malicious inputs for deep neural network backdoor attacks based on data poisoning. The main results are as follows:

(1) A neural network backdoor trigger construction method based on random perturbation is proposed. First, the requirements a backdoor trigger should meet are determined: simple generation, high attack success rate, small performance loss, invisibility to the human eye, and a small difference between poisoned and clean data. Then a trigger construction method based on random perturbation is proposed. Finally, comparative experiments verify that the proposed method generates triggers quickly; the model's performance drops by only about 3%, the attack success rate exceeds 95%, and the trigger is invisible to the naked eye, with SSIM greater than 0.97 and LPIPS less than 0.02.

(2) A backdoor neural network model training method based on poisoned data is proposed. First, it is established that the proposed training method applies to both single-user centralized training scenarios and multi-user federated learning training scenarios. Then a backdoor model training method based on the poisoned data is proposed. Finally, experiments verify that the proposed training method obtains a backdoor neural network model with excellent performance in both training scenarios. At the same time, the method is robust and can resist defense and detection methods such as ABS and neuron pruning.

(3) A neural network backdoor attack verification platform is designed and implemented. First, the three major requirements the platform must fulfill are clarified: trigger generation, backdoor model training, and backdoor attack execution. Then three functional modules are designed and implemented according to these requirements, together with visualization. Finally, testing shows that the platform can generate backdoor triggers and poisoned data, train a backdoor neural network model, and effectively carry out backdoor attacks on the target neural network.
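The random-perturbation trigger of result (1) can be illustrated with a minimal sketch. The functions below are hypothetical illustrations under stated assumptions (8-bit greyscale images as nested lists, a fixed perturbation bound), not the thesis's implementation; PSNR is used here as a simple stand-in for the SSIM and LPIPS imperceptibility metrics reported above.

```python
import math
import random

def make_trigger(h, w, eps=4, seed=0):
    # Bounded random perturbation: each entry is an integer in [-eps, eps].
    # (eps and the seeding scheme are illustrative assumptions.)
    rng = random.Random(seed)
    return [[rng.randint(-eps, eps) for _ in range(w)] for _ in range(h)]

def apply_trigger(img, trig):
    # Add the perturbation pixel-wise and clip to the valid 8-bit range.
    return [[max(0, min(255, p + t)) for p, t in zip(ri, rt)]
            for ri, rt in zip(img, trig)]

def psnr(a, b):
    # Peak signal-to-noise ratio between two images of the same shape;
    # higher PSNR means the perturbation is harder to see.
    n = len(a) * len(a[0])
    mse = sum((p - q) ** 2
              for ra, rb in zip(a, b)
              for p, q in zip(ra, rb)) / n
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

# Toy usage: a uniform grey 32x32 image stamped with the trigger.
img = [[128] * 32 for _ in range(32)]
trig = make_trigger(32, 32)
poisoned = apply_trigger(img, trig)
```

Because the perturbation is bounded by a few grey levels, the poisoned image stays visually indistinguishable from the clean one, which is the property the SSIM and LPIPS thresholds above quantify.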
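Result (2) trains the model on a mixture of clean and poisoned samples. The sketch below shows a generic dirty-label poisoning step under illustrative assumptions (flat integer pixel vectors, a fixed attacker target label); it is a sketch of the general technique, not the thesis's exact training procedure.

```python
import random

def poison_dataset(data, trigger, target_label, rate=0.1, seed=0):
    """Stamp the trigger onto a random fraction of samples and relabel
    them with the attacker's target class. Generic dirty-label poisoning;
    hypothetical helper, not the thesis's exact procedure."""
    rng = random.Random(seed)
    out = []
    for x, y in data:
        if rng.random() < rate:
            # Add the trigger pixel-wise, clip to the 8-bit range,
            # and flip the label to the attacker's target class.
            x = [max(0, min(255, p + t)) for p, t in zip(x, trigger)]
            y = target_label
        out.append((x, y))
    return out

# Toy usage: 50 identical grey samples of class 0 and a constant trigger.
clean = [([100] * 4, 0) for _ in range(50)]
poisoned = poison_dataset(clean, trigger=[5] * 4, target_label=1, rate=0.5)
```

The resulting mixed dataset is then used for ordinary training in the centralized scenario, or contributed by a malicious client in the federated scenario; in either case the model learns to associate the trigger pattern with the target label while behaving normally on clean inputs.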