With the rapid development of science and technology, the emergence of massive data, and the improvement of computing power, deep learning models have been widely applied and have shown strong performance in pattern recognition, natural language processing, and computer vision. The existence of adversarial examples is currently a major obstacle to the deployment of deep neural networks in safety-critical fields such as autonomous driving and medical diagnosis. Robustness refers to the insensitivity of a model to small disturbances of its input or parameters, and evaluating the robustness of a deep model under input disturbances is of great significance for addressing the problem of adversarial examples. Substantial progress has been made in robustness evaluation for deterministic neural networks. In a Bayesian neural network, however, the parameters are random variables: existing methods must sample from the posterior distributions of the parameters, convert the Bayesian neural network into a deterministic one, and then evaluate robustness with the methods developed for deterministic networks. However, such sampling-based methods usually come with high computational costs.

In this paper, the concept of the minimum disturbance distance lower bound is extended to Bayesian neural networks: the sensitivity of the output to input disturbances is analyzed layer by layer, and an explicit expression for the lower bound of the minimum disturbance is derived. Since the parameters of a Bayesian neural network are random variables, the hidden-layer and output-layer nodes are also random variables during the feedforward pass. We first use the L_p norm of the change in the expected output under an input disturbance to define the lower bound of the minimum disturbance, and derive its explicit expression as a robustness measure for fully-connected Bayesian neural networks. However, the outputs of a Bayesian neural network are random variables, and the distance between expectations alone is not sufficient to measure the output change. We therefore also define the lower bound of the minimum disturbance by the distance between the output distributions under the input disturbance, derive the corresponding lower bound for fully-connected Bayesian neural networks based on the Wasserstein distance, and establish the connection between the two evaluation methods through a comparative analysis.

The lower bound of the minimum disturbance obtained in this paper depends only on the mean of the absolute values of the network weights, so for a trained Bayesian neural network the computational cost of using the minimum disturbance distance for robustness evaluation is very small. Finally, we analyze the sensitivity of the convolution and pooling operations to input disturbances and extend the expectation-based lower bound of the minimum disturbance distance to achieve robustness evaluation of Bayesian convolutional neural networks.
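To make the expectation-based evaluation concrete, the sketch below illustrates one way such a bound can be computed once a Bayesian network has been trained: the means of the absolute weights of each layer are combined into a product of induced matrix norms, and the resulting Lipschitz-style constant turns an output margin into a lower bound on the input disturbance. This is a minimal sketch under that assumption; the function names, the margin-over-Lipschitz form, and the choice of induced norms are illustrative and are not taken from the paper's actual derivation.

```python
import numpy as np

def expectation_lipschitz_constant(weight_abs_means, p=np.inf):
    """Product of induced matrix norms of the layer-wise matrices E[|W_l|].

    weight_abs_means: list of 2-D arrays holding the element-wise means of the
    absolute weights of each fully-connected layer (hypothetical inputs).
    Assumes the sensitivity of the expected output factors layer by layer,
    as in a standard Lipschitz-style argument; this is an illustrative
    assumption, not the paper's exact bound.
    """
    lipschitz = 1.0
    for w in weight_abs_means:
        if p == np.inf:          # induced infinity-norm: max absolute row sum
            layer_norm = np.abs(w).sum(axis=1).max()
        elif p == 1:             # induced 1-norm: max absolute column sum
            layer_norm = np.abs(w).sum(axis=0).max()
        else:                    # p = 2: spectral norm (largest singular value)
            layer_norm = np.linalg.norm(w, 2)
        lipschitz *= layer_norm
    return lipschitz

def min_disturbance_lower_bound(weight_abs_means, output_gap, p=np.inf):
    """Lower bound on the L_p size of an input disturbance needed to shift the
    expected output by output_gap (e.g. the margin between the two largest
    expected logits of a classifier)."""
    return output_gap / expectation_lipschitz_constant(weight_abs_means, p)
```

For example, given per-layer arrays `weight_abs_means` and a classification margin `gap`, calling `min_disturbance_lower_bound(weight_abs_means, gap, p=np.inf)` returns a radius within which the expected prediction cannot change under the factorized-sensitivity assumption above; the Wasserstein-based bound discussed in the paper is not reproduced here.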