
Research On Robustness Verification Of Neural Networks

Posted on: 2024-03-12    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Z D Zhang    Full Text: PDF
GTID: 1528307145496254    Subject: Software engineering
Abstract/Summary:
With the explosive growth of data and the significant improvement of computing power, deep neural networks have increasingly become an important engine driving social and economic development. The emergence of deep neural networks has profoundly changed human production and life and has had a broad and far-reaching impact on the progress of human society. However, as intelligent components of systems in safety-critical fields, deep neural networks lack a safety verification mechanism, which leads to frequent safety accidents and has become a key bottleneck restricting their development. To eliminate these security concerns, we start from the robustness requirements of neural networks and fully consider the complexity and large scale of deep neural networks, and we study the robustness verification of deep neural networks in two complementary modes: white-box-based and black-box-based.

In this thesis, we first studied the robustness verification of Sigmoid-like neural networks, focusing on how to generate counterexamples (adversarial examples) to prove that a neural network does not satisfy robustness. We further designed the tightest approximation strategy to obtain more precise verification results. We then studied the robustness verification of Swish neural networks, focusing on how to handle the nonlinear activation function Swish. We further studied black-box-based robustness verification, which breaks through the architecture-dependent bottleneck and applies to network models of any architecture, realizing the leap from white-box-based verification to black-box-based verification. In more detail, the main contributions of this thesis are summarized as follows.

We proposed a robustness verification framework for Sigmoid-like neural networks. Compared with existing work, the framework can generate counterexamples as evidence when the neural network does not satisfy robustness. To improve the precision of the verification results, we conducted a thorough empirical analysis of existing methods and proposed the concept of "network-wise tightness" to characterize the "goodness" of an approximation approach. Based on this concept, we proposed two provably tightest approximation approaches, leading to more precise verification results. We then used constraint substitution to reduce the number of constraints, thereby improving verification efficiency. Finally, we extensively evaluated the framework on different datasets and network models. The experimental results show that: (1) compared with other methods, the verification results obtained by the framework are more precise; (2) when the neural network does not satisfy robustness, the framework can automatically provide counterexamples as evidence.

We proposed a robustness verification framework for Swish neural networks. Unlike the activation functions studied in previous work, such as ReLU and Sigmoid-like activation functions, the Swish activation function is nonlinear and non-monotonic, and existing approximation strategies cannot be applied to it. To this end, according to the characteristics of the Swish activation function, we proposed an approximation strategy for Swish. Furthermore, we combined the approximation strategy with constraint-solving technology to verify the robustness of Swish neural networks. Finally, we extensively evaluated the framework on different datasets and network models. The experimental results show the effectiveness and efficiency of the framework.
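To make the linear-approximation idea behind the first contribution concrete, the following is a minimal sketch (not taken from the thesis) of one standard way to bound an S-shaped activation such as Sigmoid over a known input interval [l, u] with a pair of sound parallel linear functions; the thesis's provably tightest, network-wise approximation is more refined than this.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def parallel_linear_bounds(l, u):
    """Sound linear bounds k*x + b_lo <= sigmoid(x) <= k*x + b_up on [l, u].

    The slope k is the minimum of sigmoid' over [l, u]; because sigmoid'
    is unimodal (it peaks at 0), that minimum is attained at an endpoint.
    With this slope, sigmoid(x) - k*x is non-decreasing on [l, u], so the
    two offsets can be read off at the interval endpoints.
    """
    assert l <= u
    k = min(sigmoid_prime(l), sigmoid_prime(u))
    b_lo = sigmoid(l) - k * l   # lower line touches the curve at x = l
    b_up = sigmoid(u) - k * u   # upper line touches the curve at x = u
    return k, b_lo, b_up

# Quick soundness check on a sample interval crossing zero.
l, u = -1.5, 2.0
k, b_lo, b_up = parallel_linear_bounds(l, u)
xs = np.linspace(l, u, 1001)
assert np.all(k * xs + b_lo <= sigmoid(xs) + 1e-9)
assert np.all(sigmoid(xs) <= k * xs + b_up + 1e-9)
print(f"slope={k:.4f}, lower offset={b_lo:.4f}, upper offset={b_up:.4f}")
```

Such per-neuron bounds can be propagated through the affine layers of a network to obtain linear constraints suitable for a solver; how tight the bounds are determines how precise the final verification result is.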
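For the Swish setting, the difficulty is that swish(x) = x * sigmoid(x) is non-monotonic, so even its exact range over an interval cannot be read off at the endpoints alone. The sketch below is again not the thesis's tailored linear relaxation (which is tighter); it only computes sound constant bounds by locating Swish's single stationary point, to illustrate how the non-monotonicity has to be handled.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def swish(x):
    return x * sigmoid(x)

def swish_stationary_point(lo=-2.0, hi=-1.0, iters=60):
    """Bisection for the unique root of swish'(x) = 0, near x ~ -1.278.

    swish'(x) = sigmoid(x) * (1 + x * (1 - sigmoid(x))); only the second
    factor changes sign, and it does so exactly once, between -2 and -1.
    """
    g = lambda x: 1.0 + x * (1.0 - sigmoid(x))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def swish_interval_bounds(l, u):
    """Sound constant bounds on swish over [l, u].

    Swish decreases up to its stationary point x* and increases after it,
    so the maximum is attained at an endpoint and the minimum is at x*
    whenever x* falls inside the interval.
    """
    assert l <= u
    x_star = swish_stationary_point()
    hi = max(swish(l), swish(u))
    lo = swish(x_star) if l <= x_star <= u else min(swish(l), swish(u))
    return lo, hi

print(swish_interval_bounds(-3.0, 1.0))   # the minimum ~ -0.2785 lies inside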
We proposed a black-box-based verification framework. Specifically, we proposed an abstraction-based training method that embeds robustness properties into the training process, so that the verification process no longer needs to consider the internal architecture of the neural network. In the training phase, perturbed images are first mapped to training intervals by abstraction, and the abstracted intervals are then fed into the neural network for training. In the verification phase, the framework uses the same abstraction function to map the perturbed image into an interval and then uses the neural network to classify that interval. If the classification result for the interval is consistent with the classification result for the original image, the neural network is robust; otherwise, it is not robust. Further, we extensively evaluated the framework on different datasets and models. The experimental results show the effectiveness, efficiency, applicability, and scalability of the framework.
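The following is a minimal sketch of the interval-abstraction idea described above; the abstraction granularity, the (lower, upper) encoding, and the `classify` model are all hypothetical assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def abstract_interval(image, eps, grid=0.1):
    """Map an eps-perturbed image to a (lower, upper) interval encoding.

    Pixel values are clipped to [0, 1] and the bounds are snapped outward
    to a coarse grid, so the single abstract interval covers the whole
    perturbation ball (hypothetical abstraction granularity).
    """
    lower = np.floor(np.clip(image - eps, 0.0, 1.0) / grid) * grid
    upper = np.ceil(np.clip(image + eps, 0.0, 1.0) / grid) * grid
    return np.concatenate([lower.ravel(), upper.ravel()])

def is_robust(classify, image, eps):
    """Black-box robustness check: the model is never opened.

    `classify` is any trained classifier that accepts the concatenated
    (lower, upper) encoding produced by `abstract_interval`; during
    training it would have been fed such abstract intervals instead of
    concrete images (abstraction-based training).
    """
    label_original = classify(abstract_interval(image, 0.0))
    label_perturbed = classify(abstract_interval(image, eps))
    return label_perturbed == label_original
```

Because the check only calls `classify`, it applies to a network of any architecture, which is the point of the black-box mode.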
Keywords/Search Tags: Black-Box-Based Verification, Counterexample Generating, Linear Approximation, Neural Networks, Robustness Verification