
Counter-Example Guided Automatic Repair Of Neural Networks Based On Verification

Posted on: 2024-05-17  Degree: Master  Type: Thesis
Country: China  Candidate: W Pang  Full Text: PDF
GTID: 2568307061491654  Subject: Computer Science and Technology
Abstract/Summary:
Neural networks are widely used in many important real-world scenarios and fields, such as autonomous driving, medical diagnosis, financial prediction, and aircraft collision avoidance. In these domains, errors in neural networks can cause severe consequences. Formal verification is a rigorous, mathematically grounded method that can prove the correctness of neural networks. For an erroneous neural network, the verifier outputs a set of inputs that the network judges incorrectly, called counter-examples. Repairing a neural network that has been found to be wrong is a valuable line of work, because retraining not only requires collecting the original training data but also gives no guarantee that the retrained network will be completely correct. Therefore, this thesis proposes two methods for repairing neural networks automatically: the first uses a localization technique to identify the layer with the greatest impact on the counter-examples found by the verifier, then uses the particle swarm optimization (PSO) algorithm to optimize that layer's weights; the second uses the verifier to locate the neuron with the greatest impact on the counter-examples, then uses an optimization algorithm or linear programming to optimize its weights until all counter-examples are eliminated.

The main work of this dissertation is as follows:

1. This dissertation proposes a method that repairs the neural network with respect to a given property by modifying the weights of a specific layer. First, the neural network is verified; if verification fails, the verifier identifies inputs that violate the property, i.e. counter-examples. Then two fault localization methods are proposed: one uses back-propagation to find the responsible layer, and the other observes the activation pattern of neurons in each
layer. Finally, the PSO algorithm is used to optimize the weights of that layer, where the optimization objective is a penalty function composed of a loss function and a penalty term. Experimental results show that, for a given property, this method can effectively fix errors in the neural network without introducing new errors through the modification, thus improving the accuracy of the network on that property.

2. This dissertation proposes a method that repairs the neural network with respect to a given property by modifying all of the connection weights of a specific neuron. First, the neural network is verified; if verification fails, a set of counter-examples is returned. Then back-propagation is used to locate the responsible neuron. Finally, two methods repair the input and output weights of that single neuron: one uses the PSO algorithm, which can modify neurons in any layer; the other uses linear programming, which modifies neurons in the output layer and achieves a minimal modification of the weights. Experiments show that repairing the weights of an individual neuron can effectively eliminate counter-examples without creating new errors.
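The back-propagation fault localization above can be sketched as follows. This is a minimal illustrative example, not the thesis implementation: a tiny two-layer ReLU network, a hypothetical counter-example, and a "blame" score that back-propagates the violation loss and picks the layer whose weights receive the largest average gradient magnitude.

```python
# Illustrative sketch of back-propagation fault localization (assumed
# network shape and loss; not the thesis code).
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(3, 5)); b1 = rng.normal(size=5)
W2 = rng.normal(size=(5, 2)); b2 = rng.normal(size=2)

x = rng.uniform(-1, 1, size=3)        # a counter-example input

# Forward pass, keeping intermediates for back-propagation.
h_pre = x @ W1 + b1
h = np.maximum(h_pre, 0.0)            # ReLU
out = h @ W2 + b2

# Violation loss L = out[1] - out[0]: how much the wrong class (1)
# beats the required class (0) on this counter-example.
d_out = np.array([-1.0, 1.0])         # dL/dout
grad_W2 = np.outer(h, d_out)          # dL/dW2
d_h = W2 @ d_out                      # dL/dh
d_hpre = d_h * (h_pre > 0)            # ReLU gate
grad_W1 = np.outer(x, d_hpre)         # dL/dW1

# Blame the layer with the largest mean absolute gradient.
scores = [np.abs(grad_W1).mean(), np.abs(grad_W2).mean()]
suspicious_layer = int(np.argmax(scores))   # 0 -> W1, 1 -> W2
```

In practice the scores would be accumulated over the whole set of counter-examples rather than a single input.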
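The layer-level PSO repair can be sketched like this. Everything here is an illustrative assumption (toy network, a made-up property "class 0 must win on these inputs", and chosen penalty weights): PSO searches over the flattened weights of the located layer, minimizing a penalty function that combines the property-violation penalty with a term keeping the weights close to the original.

```python
# Minimal sketch of repairing one layer's weights with PSO
# (illustrative network and property, not the thesis implementation).
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 2 inputs, ReLU hidden layer of 4, 2 outputs.
W1 = rng.normal(size=(2, 4)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 2)); b2 = rng.normal(size=2)

def forward(x, W2_flat):
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2_flat.reshape(4, 2) + b2

# Assumed property: class 0 must win on all of these inputs.
inputs = rng.uniform(-1, 1, size=(50, 2))

def objective(W2_flat):
    out = forward(inputs, W2_flat)
    margin = out[:, 0] - out[:, 1]               # want margin > 0
    penalty = np.sum(np.maximum(-margin, 0.0))   # violation penalty
    drift = np.sum((W2_flat - W2.ravel()) ** 2)  # stay near original
    return penalty + 0.01 * drift

def pso(obj, x0, n_particles=30, iters=150, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO over a flat weight vector."""
    dim = x0.size
    pos = x0 + 0.5 * rng.normal(size=(n_particles, dim))
    pos[0] = x0                    # keep the original weights as one particle
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([obj(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([obj(p) for p in pos])
        better = vals < pbest_val
        pbest[better] = pos[better]
        pbest_val[better] = vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g

repaired = pso(objective, W2.ravel())
```

Seeding one particle with the original weights guarantees the result is never worse than doing nothing, which matches the goal of repairing without introducing new errors.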
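The linear-programming variant exploits the fact that the output layer is linear in its own weights, so "the required class wins on every counter-example" becomes a set of linear constraints, and the L1 distance to the original weights can be minimized exactly. A minimal sketch, assuming SciPy and made-up counter-example activations (the hidden activations H, rival outputs, and margin eps are all illustrative):

```python
# Sketch of minimal-change output-layer repair via linear programming
# (assumed data; not the thesis implementation).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Fixed ReLU hidden activations of 6 counter-examples, plus a constant-1
# column so the output neuron's bias can also be adjusted.
H = np.maximum(rng.normal(size=(6, 4)), 0.0)
H_aug = np.hstack([H, np.ones((6, 1))])          # (6, 5)

w0 = rng.normal(size=5)                          # original [weights, bias]
rival = rng.normal(size=6)                       # competing output values
eps = 0.1                                        # required winning margin

n = w0.size
# Variables: [w (n), t (n)] with t_i >= |w_i - w0_i|; minimize sum(t).
c = np.concatenate([np.zeros(n), np.ones(n)])

A, b = [], []
for h, r in zip(H_aug, rival):                   # h.w >= r + eps
    A.append(np.concatenate([-h, np.zeros(n)])); b.append(-(r + eps))
I = np.eye(n)
A.extend(np.hstack([I, -I]));  b.extend(w0)      #  w - w0 <= t
A.extend(np.hstack([-I, -I])); b.extend(-w0)     #  w0 - w <= t

res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
w_repaired = res.x[:n]
```

Because the bias is a free variable, the LP is always feasible here, and the optimum gives the smallest L1 weight change that eliminates every listed counter-example.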
Keywords/Search Tags:Neural network, Formal verification, Back-propagation, Particle Swarm Optimisation, Linear programming