Research On Fault Injection And Fault Tolerance For Neural Networks

Posted on: 2022-11-19
Degree: Master
Type: Thesis
Country: China
Candidate: X G Ma
Full Text: PDF
GTID: 2518306764499784
Subject: Automation Technology

Abstract/Summary:
With the rapid development of deep learning, convolutional neural networks are being applied ever more widely, and because many of these applications are in security-critical fields, research on the security of neural networks has become particularly important. In recent years, scholars have shown that the operation of an entire neural network can be disturbed by adding perturbations to its input; such input-level interference, however, is often easy to detect. This paper instead focuses on errors inside the neural network model, namely bit-flip errors. Such errors are usually caused by changes in the external environment, which can flip bits in the weights of a model running on a hardware platform; corrupted weights can degrade the accuracy of the entire model or even cause it to malfunction. Using common neural network models, this paper studies bit-flip attack methods and fault-tolerance methods for coping with bit flips. The main contributions are as follows.

(1) The paper designs a fault injection model for neural networks and tests them with three injection methods: full-network injection, fixed-point injection, and layer-wise injection. The AlexNet, VGG16, and GoogLeNet models are selected for the experiments, which analyze how the attack's impact depends on the number of weights, the bit-flip direction, the position of the flipped bit within a weight, and the layer into which the fault is injected. The results show that the more weights a model has, the less it is affected by the attack; for the flip direction, 0→1 flips have the strongest effect on the model; for the bit position, faults have a significant effect essentially only when they hit the exponent bits; and in the layer-wise experiments, faults injected into later layers have a greater impact on the results. (A minimal injection sketch is given below.)

(2) To address the problem that bit-flip attacks are less effective against quantized neural networks, the paper proposes a double-layer search bit attack method that combines a two-layer search with gradient ranking: the attack traverses the network two layers at a time and, within each pair of layers, uses gradient ranking to locate the vulnerable bits to flip. The experimental results show that the double-layer search bit algorithm greatly improves attack efficiency, with substantially higher attack speed and stronger attack effect than the single-layer search bit algorithm, thereby overcoming the difficulty of mounting bit-flip attacks on quantized networks. (A sketch of the search structure also follows below.)
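To make the injection procedure in (1) concrete, the following is a minimal sketch assuming a float32 PyTorch model; the helper names `flip_bit` and `inject_random_fault` are illustrative, not the thesis's actual code. It implements the full-network setting: one randomly chosen weight has one chosen bit flipped, and targeting an exponent bit (bits 23-30 of the IEEE-754 single-precision format) reproduces the damaging case identified in the experiments.

```python
# Minimal sketch of single-bit weight fault injection (illustrative names).
import random
import struct
import torch
import torch.nn as nn

def flip_bit(v: float, bit: int) -> float:
    """Flip one bit of a float32 value (0 = LSB, 23-30 = exponent, 31 = sign)."""
    i = struct.unpack("<I", struct.pack("<f", v))[0]
    return struct.unpack("<f", struct.pack("<I", i ^ (1 << bit)))[0]

def inject_random_fault(model: nn.Module, bit: int) -> None:
    """Full-network injection: flip `bit` in one randomly chosen weight."""
    weights = [p for p in model.parameters() if p.dim() > 1]
    flat = random.choice(weights).data.view(-1)
    idx = random.randrange(flat.numel())
    flat[idx] = flip_bit(flat[idx].item(), bit)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
inject_random_fault(model, bit=30)  # high exponent bit: the damaging case
```

Fixed-point injection then amounts to fixing the target weight and bit rather than sampling them, and layer-wise injection to restricting the sampled weights to a single layer.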
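The double-layer search in (2) can be sketched as follows; this is an illustrative approximation, not the thesis's implementation, and it operates on float32 weights for simplicity even though the thesis targets quantized networks. The loop walks the network two layers at a time and uses gradient magnitude as the ranking criterion for choosing which weight to corrupt.

```python
# Illustrative sketch of the double-layer search with gradient ranking.
import struct
import torch
import torch.nn as nn
import torch.nn.functional as F

def flip_bit(v: float, bit: int) -> float:
    """Flip one bit of a float32 value (same helper as the sketch above)."""
    i = struct.unpack("<I", struct.pack("<f", v))[0]
    return struct.unpack("<f", struct.pack("<I", i ^ (1 << bit)))[0]

def double_layer_attack(model: nn.Module, x, y, bit: int = 30) -> None:
    layers = [m for m in model.modules() if isinstance(m, nn.Linear)]
    # Traverse the network two layers at a time (pairing is illustrative).
    for pair in zip(layers[0::2], layers[1::2]):
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        # Gradient ranking: the weight with the largest |gradient| in the
        # pair of layers hosts the "weak bit" to flip.
        scored = [(layer.weight.grad.abs().view(-1).max().item(),
                   layer.weight.grad.abs().view(-1).argmax().item(),
                   layer) for layer in pair]
        _, idx, layer = max(scored, key=lambda t: t[0])
        with torch.no_grad():
            flat = layer.weight.view(-1)
            flat[idx] = flip_bit(flat[idx].item(), bit)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                      nn.Linear(16, 8), nn.ReLU(),
                      nn.Linear(8, 2))
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
double_layer_attack(model, x, y)
```

A real attack would additionally verify after each flip that the loss actually increased, and revert the flip otherwise.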
(3) To address the problem that ordinary pruning defenses cannot effectively counter bit-flip attacks, the paper proposes a defense model based on weight sharing, which improves robustness by weight-sharing methods without changing the structure of the original neural network model. The method first identifies the key neurons, splits them, and apportions their weights across the copies, so that corrupting any single key-neuron weight has a weaker effect on the whole model. The experimental results show that, at the same network complexity, the model after weight apportionment is less affected by bit-flip attacks, and the weight-sharing model shows better robustness in experiments on different datasets. (A sketch of the neuron splitting is given below.)
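One plausible reading of the neuron splitting in (3) is sketched below; the layer types and the function name `split_neuron` are assumptions for illustration. A key neuron is duplicated and its outgoing weights are halved, so the network computes exactly the same function while a bit flip in either copy carries only half of the original influence.

```python
# Illustrative sketch of weight sharing by splitting one key neuron.
import torch
import torch.nn as nn

def split_neuron(fc1: nn.Linear, fc2: nn.Linear, idx: int):
    """Duplicate output neuron `idx` of fc1 and halve its weights in fc2."""
    new1 = nn.Linear(fc1.in_features, fc1.out_features + 1)
    new2 = nn.Linear(fc2.in_features + 1, fc2.out_features)
    with torch.no_grad():
        new1.weight[:-1] = fc1.weight       # copy original neurons
        new1.weight[-1] = fc1.weight[idx]   # duplicate the key neuron
        new1.bias[:-1] = fc1.bias
        new1.bias[-1] = fc1.bias[idx]
        new2.weight[:, :-1] = fc2.weight
        new2.weight[:, idx] = fc2.weight[:, idx] / 2  # apportion the
        new2.weight[:, -1] = fc2.weight[:, idx] / 2   # outgoing weight
        new2.bias.copy_(fc2.bias)
    return new1, new2

fc1, fc2 = nn.Linear(8, 16), nn.Linear(16, 2)
n1, n2 = split_neuron(fc1, fc2, idx=3)
x = torch.randn(4, 8)
assert torch.allclose(fc2(torch.relu(fc1(x))),
                      n2(torch.relu(n1(x))), atol=1e-5)
```

Because the two copies always carry identical activations, halving the downstream weights preserves the output exactly; the benefit appears only under faults, when at most one copy is corrupted.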
Keywords/Search Tags: Neural networks, Bit-flip, Adversarial networks, Robustness, Fault tolerance, Fault injection