
Research And Implementation Of Fault Tolerance Enhancement Technology For Deep Neural Networks

Posted on: 2022-12-03    Degree: Master    Type: Thesis
Country: China    Candidate: R X Sun    Full Text: PDF
GTID: 2518306764467574    Subject: Automation Technology
Abstract/Summary:
With the development of hardware technology, the computing power of computers has improved rapidly, and deep neural networks are now widely deployed in everyday applications such as self-driving cars, intelligent security systems, and intelligent medical systems. Safety-critical systems impose strict requirements on reliability and safety, and abnormal execution environments or internal system failures may cause serious consequences. Input-based attacks such as adversarial examples or backdoor attacks can cause a sharp drop, or even a collapse, in the accuracy of a neural network, and weight faults caused by changes in the external environment or by fault injection attacks can likewise cause a neural network to fail. This thesis therefore analyzes the interpretability of fault tolerance in neural networks and improves their fault-tolerant ability from two perspectives.

(1) From the perspective of the numerical representation of weights, this thesis proposes a fault tolerance enhancement method based on weight remapping, targeting the situation in which the weights of a neural network are heavily perturbed by harsh external environments or malicious fault injection attacks. First, the relationship between weight perturbation and accuracy is analyzed to determine which kinds of perturbation have a large impact on the accuracy of the network. Then, inspired by the theory of stochastic computing, a weight remapping method based on the Gaussian distribution is proposed: 32-bit floating-point numbers are remapped to weights so that all weight values are guaranteed to lie within the valid weight range (sketched in the first listing after this abstract). Finally, extensive experiments on multiple neural networks show that the proposed method effectively improves the fault-tolerant ability of neural networks.

(2) The weight remapping method of the previous study only handles the case in which a weight is perturbed to a larger value. To be more general, the resilience of neural networks is explored and a method for measuring this fault-tolerant resilience is proposed. First, the neural network classifier is modeled mathematically as a high-dimensional polyhedron, and the resilience is obtained by approaching the classification boundary through a gradient approximation at a fixed weight value (second listing below). By construction, a perturbation does not change the classification result of the network as long as the weight perturbation stays within the resilience. Then, based on this resilience measure, the weight remapping method of the previous study is improved. Finally, extensive experiments demonstrate the effectiveness of the proposed resilience measurement method.

(3) Although the previous methods can effectively reduce the influence of weight perturbations, the outputs of a neural network may still become wrong because errors accumulate through computation. This thesis therefore proposes an upper-bound-aware fault tolerance method from the perspective of blocking erroneous outputs. First, for the weight or computation faults of a network, the fault propagation paths and their range of influence are analyzed quantitatively, an interpretable fault propagation model is established, and the corresponding propagation conditions are derived. A gradient-based method is proposed to find the upper bounds of the valid outputs of the network. Then, a low-overhead fault tolerance mechanism is implemented by modifying the activation function (third listing below). Finally, extensive experiments on multiple neural network models show that the proposed method effectively improves the fault tolerance of neural networks.
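The abstract does not spell out the exact remapping used in contribution (1), so the following is only a minimal sketch of the underlying idea: weights are encoded through a Gaussian CDF fitted to the layer's own statistics, and decoding clips the result back into a bounded range, so a corrupted stored value cannot produce an arbitrarily large weight. The function names (remap_weights, restore_weights), the CDF/inverse-CDF mapping, and the 3-sigma valid range are assumptions, not the thesis's actual construction.

```python
import numpy as np
from scipy.stats import norm

def remap_weights(w, num_std=3.0):
    """Encode float32 weights as Gaussian-CDF codes in (0, 1)."""
    mu, sigma = float(w.mean()), float(w.std()) + 1e-12
    code = norm.cdf(w, loc=mu, scale=sigma)        # weight -> code in (0, 1)
    return code.astype(np.float32), (mu, sigma, num_std)

def restore_weights(code, params):
    """Decode codes back to weights, forcing the result into a valid range."""
    mu, sigma, num_std = params
    code = np.clip(code, 1e-6, 1.0 - 1e-6)         # a corrupted code is pulled back into (0, 1)
    w = norm.ppf(code, loc=mu, scale=sigma)        # inverse CDF returns to weight space
    lo, hi = mu - num_std * sigma, mu + num_std * sigma
    return np.clip(w, lo, hi).astype(np.float32)   # guarantee the valid weight range

# A large fault injected into the stored code still decodes to an in-range weight.
w = (np.random.randn(256, 128) * 0.05).astype(np.float32)
code, params = remap_weights(w)
code[0, 0] = np.float32(3.4e38)                    # simulated storage fault
print(restore_weights(code, params)[0, 0])
```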
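Contribution (2) measures resilience by approaching the classification boundary with a gradient approximation at a fixed weight. The sketch below assumes a simple first-order reading of that idea: the decision margin between the top two logits is linearised with respect to one weight tensor, giving a rough bound on how large a perturbation of that tensor can be before the prediction flips. The model, input shape, and the function weight_resilience are illustrative assumptions rather than the thesis's formulation.

```python
import torch
import torch.nn as nn

def weight_resilience(model, x, weight):
    """First-order estimate of the largest perturbation of `weight` that keeps
    the top-1 prediction for input `x` unchanged."""
    logits = model(x)
    top2 = logits.topk(2, dim=1).values[0]
    margin = top2[0] - top2[1]                            # distance to the decision boundary in logit space
    grad = torch.autograd.grad(margin, weight)[0]
    return (margin / (grad.abs().max() + 1e-12)).item()   # margin shrinks at most at rate |dm/dw|

# Illustrative classifier and input (shapes are assumptions).
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(1, 1, 28, 28)
r = weight_resilience(model, x, model[3].weight)
print(r)   # perturbing model[3].weight by less than about r should not flip the prediction
```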
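Contribution (3) blocks erroneous outputs by modifying the activation function once per-layer output upper bounds are known. The sketch below assumes the simplest such modification: a ReLU clamped at a precomputed bound, so activations inflated by a fault are suppressed instead of propagated. The BoundedReLU class, the harden helper, and the placeholder bound of 6.0 are assumptions; the thesis obtains its bounds with a gradient-based search that is not reproduced here.

```python
import torch
import torch.nn as nn

class BoundedReLU(nn.Module):
    """ReLU clamped at a per-layer upper bound so that fault-inflated
    activations cannot propagate to later layers."""
    def __init__(self, upper_bound: float):
        super().__init__()
        self.upper_bound = upper_bound

    def forward(self, x):
        return torch.clamp(x, min=0.0, max=self.upper_bound)

def harden(model: nn.Sequential, bounds: dict) -> nn.Sequential:
    """Swap the ReLUs of a Sequential model for bounded ones (bounds keyed by layer index)."""
    for i, module in enumerate(model):
        if isinstance(module, nn.ReLU) and i in bounds:
            model[i] = BoundedReLU(bounds[i])
    return model

# The bound would normally come from the gradient-based upper-bound search; 6.0 is a placeholder.
net = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
net = harden(net, {2: 6.0})
print(net)
```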
Keywords/Search Tags:Deep neural network, fault tolerance, weight fault, resilience, perturbation