The robustness of neural networks is fundamental to the reliability and security of the systems that host them. To improve verification scalability, it is common to over-approximate the non-linear activation functions in a neural network by linear constraints, which transforms the verification problem into an efficiently solvable linear programming problem. Because over-approximation inevitably introduces overestimation, many efforts have been devoted to defining the tightest possible approximations. In this paper, we focus on linear approximation approaches for Sigmoid-like activation functions in robustness verification. We first present DeepCert, a tighter linear approximation approach that strengthens the robustness verification of Convolutional Neural Networks (CNNs). We then conduct a thorough empirical analysis of existing neuron-wise characterizations of tightness and reveal that each is superior only on specific neural networks. We introduce the notion of network-wise tightness and show that computing it is a complex non-convex optimization problem. We bypass this complexity with NeWise, an efficient approximation approach that is provably the tightest for certain types of networks. Finally, we observe that existing approaches define approximations only on overestimated input domains, so an approximation that is tight on the overestimated domain is not necessarily tight on the actual domain. We therefore propose DualApp, a novel under-approximation-guided approach for defining tight over-approximations, together with two complementary under-approximation algorithms based on sampling and gradient descent. We evaluate the three approaches on networks with different structures, and the results show that they outperform other state-of-the-art approximation-based approaches.
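To make the core idea concrete, the sketch below (not the paper's DeepCert/NeWise/DualApp algorithms, just an illustrative assumption) bounds the sigmoid on an interval where it is concave (l ≥ 0): there the chord through the endpoints is a sound linear lower bound and any tangent line is a sound linear upper bound, which is the kind of linear constraint a verifier would hand to an LP solver.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def linear_bounds_concave(l, u):
    """Linear under-/over-approximation of sigmoid on [l, u] with 0 <= l < u.

    On the non-negative axis sigmoid is concave, so:
      - the chord through (l, sigmoid(l)) and (u, sigmoid(u)) lies below the
        curve -> sound lower bound,
      - the tangent at the midpoint lies above the curve -> sound upper bound.
    Each bound is returned as a (slope, intercept) pair.
    (Illustrative choice of tangent point; a real verifier optimizes it.)
    """
    assert 0 <= l < u
    # Lower bound: endpoint chord.
    k_low = (sigmoid(u) - sigmoid(l)) / (u - l)
    b_low = sigmoid(l) - k_low * l
    # Upper bound: tangent at the midpoint, slope sigma'(m) = s * (1 - s).
    m = 0.5 * (l + u)
    s = sigmoid(m)
    k_up = s * (1.0 - s)
    b_up = s - k_up * m
    return (k_low, b_low), (k_up, b_up)
```

Intervals that cross zero need a case split (sigmoid is convex for x < 0), which is exactly where the choice of tangent points starts to matter for tightness.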