
Robustness Evaluation Of Convolutional Neural Network Models Based On Prediction Uncertainty

Posted on: 2022-04-17    Degree: Master    Type: Thesis
Country: China    Candidate: J Z Su    Full Text: PDF
GTID: 2518306563964929    Subject: Computer technology
Abstract/Summary:
Recently, deep learning technology has played an important role in solving practical problems and has brought major breakthroughs in the development of artificial intelligence. However, studies have shown that well-trained deep learning models are vulnerable to attacks from adversarial examples, and model robustness still faces serious challenges. Robustness reflects the stability of a model under various normal and abnormal inputs. Especially in safety-critical fields, evaluating robustness helps to uncover the defects of a model and to further improve its stability. As a typical deep learning network structure, Convolutional Neural Networks (CNN) are also threatened by adversarial examples in the field of image recognition. Focusing on the robustness of CNN models, this thesis proposes an evaluation method for the robustness of CNN models based on prediction uncertainty indicators, and finds that these uncertainty indicators can be further used to generate test data. The contributions are as follows:

(1) To verify the feasibility of using prediction uncertainty indicators to evaluate the robustness of CNN models, this thesis reconstructs the CNN classifier by exploiting the fact that Dempster-Shafer (D-S) evidence theory supports uncertainty reasoning and decision-making. Experiments are then designed and carried out around the two uncertainty indicators obtained from the reconstructed model during evidential reasoning, namely conflict and ignorance (see the first sketch after this abstract). The experiments show that these uncertainty indicators can measure model performance, so the robustness of CNN models can be evaluated from prediction uncertainty.

(2) By analyzing the relationship between the value of information conflict and the intensity of the perturbation, this thesis proposes two robustness evaluation indicators, the conflict upper bound and the conflict lower bound, and designs a robustness evaluation framework for CNN models (see the second sketch after this abstract). Evaluation experiments are performed on different models over multiple datasets. The experiments show that the proposed conflict robustness indicators are effective and can be used to evaluate the robustness of CNN models.

(3) Current adversarial test data generation methods consider only the structure and parameters of CNN models, and ignore the conflict of example features and test adequacy. This thesis therefore proposes a new test data generation method named Deep Conflict. The method solves a joint optimization problem that maximizes both the conflict value of example features and the neuron coverage rate, and generates test data by continually mutating the original image (see the third sketch after this abstract). Experiments show that the generated test data are of high quality and visually close to the original images. Moreover, Deep Conflict outperforms DLFuzz in terms of the number of generated test data, and outperforms the FGSM and CW methods in terms of increasing the neuron coverage rate.
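First sketch: a minimal, self-contained Python illustration (not taken from the thesis) of how the two uncertainty indicators in contribution (1) arise in Dempster-Shafer combination. Conflict is the mass that falls on the empty set before normalization; ignorance is the mass left on the full frame of discernment after combination. The two-source setting and the toy mass functions are illustrative assumptions, not the thesis's reconstructed classifier.

```python
# Dempster's rule for two mass functions given as {frozenset(classes): mass}.
# Returns the combined masses plus the conflict and ignorance indicators.
from itertools import product

def combine(m1, m2, frame):
    conflict = 0.0
    combined = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if not inter:
            conflict += ma * mb                      # conflicting evidence
        else:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
    norm = 1.0 - conflict                            # assumes conflict < 1
    combined = {s: v / norm for s, v in combined.items()}
    ignorance = combined.get(frozenset(frame), 0.0)  # mass left on the full frame
    return combined, conflict, ignorance

# Toy example: two evidence sources over the classes {cat, dog}.
frame = {"cat", "dog"}
m1 = {frozenset({"cat"}): 0.7, frozenset(frame): 0.3}
m2 = {frozenset({"dog"}): 0.6, frozenset(frame): 0.4}
masses, conflict, ignorance = combine(m1, m2, frame)
print(conflict, ignorance, masses)
```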
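Second sketch: one rough reading of the evaluation framework in contribution (2). It sweeps a set of perturbation intensities and records the extreme values of the conflict indicator. `conflict_of` is a hypothetical helper standing in for the conflict value produced by the reconstructed evidential classifier, and the random sign noise is only a placeholder for whatever perturbation model the thesis actually uses.

```python
# A hedged sketch of a conflict-versus-perturbation sweep; not the thesis's API.
import torch

def conflict_bounds(model, x, conflict_of, intensities=(0.0, 0.02, 0.04, 0.08)):
    values = []
    for eps in intensities:
        # simple L_inf-style perturbation as a placeholder perturbation model
        x_pert = (x + eps * torch.randn_like(x).sign()).clamp(0.0, 1.0)
        values.append(float(conflict_of(model, x_pert)))
    # smallest / largest observed conflict serve as rough lower / upper bounds
    return min(values), max(values)
```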
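Third sketch: contribution (3) describes Deep Conflict as a joint optimization that mutates the original image to raise both the feature-conflict value and the neuron coverage rate. Below is a DLFuzz-style gradient-ascent sketch of that idea, not the thesis's implementation; `conflict_fn` and `uncovered_activation_fn` are hypothetical stand-ins for the conflict objective and the coverage bookkeeping, and the step sizes are arbitrary.

```python
# Mutate an image by gradient ascent on a joint objective (conflict term plus
# activation of currently uncovered neurons), staying within an L_inf ball
# around the original image and inside the valid pixel range.
import torch

def deep_conflict_mutate(model, x, conflict_fn, uncovered_activation_fn,
                         lam=0.5, step=0.01, eps=0.1, iters=20):
    x0 = x.detach()
    x_adv = x0.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        objective = conflict_fn(model, x_adv) + lam * uncovered_activation_fn(model, x_adv)
        grad, = torch.autograd.grad(objective, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()   # ascend the joint objective
        x_adv = x0 + (x_adv - x0).clamp(-eps, eps)    # stay near the original image
        x_adv = x_adv.clamp(0.0, 1.0)                 # keep pixels valid
    return x_adv
```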
Keywords/Search Tags:CNN, Robustness, Prediction uncertainty, Adversarial examples, Neuron coverage rate