In recent years, the rapid development of deep learning has enabled its widespread application in scenarios that demand high security and ethical standards. However, the opaque and inexplicable nature of the internal decision-making process of deep learning models has made their credibility increasingly questionable. Existing research has proposed various schemes that explain the outcomes of deep learning models in order to enhance their interpretability. However, most current work suffers from the problem that the explanation results are weakly correlated with the model but strongly correlated with the input sample. Moreover, current explanation methods address only the simple causal question, namely "Why is the output of the model P?", without considering more complex causal problems, such as the P-contrastive causal problem "Why is the output of the model P instead of Q?" and the O-contrastive causal problem "Why is the output of the model P when the input is a, but Q when the input is b?". As a result, existing explanation methods are inadequate for real-world scenarios that involve complex causal problems, and it is crucial to investigate how to develop targeted and adaptive explanation methods for different causal problems.

This article expands the applicable scenarios of existing deep learning model interpretation and, for the first time, conducts a comprehensive study of various causal problems of deep learning models in image classification. Starting from the three scenarios of simple causal problems, P-contrastive causal problems, and O-contrastive causal problems, it explores the key technologies of visual explanation of deep learning models, provides adaptive visual explanation solutions for the different causal problems, and achieves a deeper understanding of model interpretability.

For explaining the simple causal problem, this paper proposes CCE, a visual explanation scheme based on counterfactual contrast, which comprises three modules: sparse counterfactual sample generation, weighted class activation feature map generation, and contrast saliency map generation. The basic idea of CCE is to generate counterfactual samples that reduce the probability of the target class in the simple causal problem, and then to use the difference between the deep learning model's representations of the original sample and the counterfactual sample to locate the image regions that play a decisive role for the target class, thereby achieving a visual explanation of the model.
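As a minimal illustration of this counterfactual-contrast idea (a sketch under stated assumptions, not the paper's exact CCE algorithm), the following PyTorch code assumes a counterfactual image is already available, computes a gradient-weighted class activation map for the original and the counterfactual input, and keeps the regions whose evidence for the target class drops. The function names, the Grad-CAM-style channel weighting, and the choice of target layer are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def weighted_cam(model, target_layer, x, target_class):
    """Gradient-weighted class activation map (illustrative, Grad-CAM style)."""
    feats = {}
    handle = target_layer.register_forward_hook(
        lambda module, inputs, output: feats.__setitem__("a", output))
    score = model(x)[0, target_class]                 # class score for this input
    grads = torch.autograd.grad(score, feats["a"])[0]
    handle.remove()
    weights = grads.mean(dim=(2, 3), keepdim=True)    # per-channel importance
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                         align_corners=False)


def contrast_saliency(model, target_layer, x_orig, x_cf, target_class):
    """Keep the regions whose support for the target class drops when the
    original image x_orig is replaced by its counterfactual x_cf."""
    cam_orig = weighted_cam(model, target_layer, x_orig, target_class)
    cam_cf = weighted_cam(model, target_layer, x_cf, target_class)
    diff = F.relu(cam_orig - cam_cf)                  # lost evidence only
    return diff / (diff.max() + 1e-8)                 # normalise to [0, 1]


# Hypothetical usage with a torchvision ResNet and its last conv block:
# from torchvision.models import resnet18
# model = resnet18(weights=None).eval()
# sal = contrast_saliency(model, model.layer4, x_orig, x_cf, target_class=207)
```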
For explaining P-contrastive causal problems, this paper proposes CCE-P, a contrastive visual explanation scheme based on targeted counterfactual examples. Specifically, on the one hand, CCE-P proposes a targeted counterfactual sample generation algorithm that makes the target model classify the counterfactual examples as the contrast class Q. On the other hand, CCE-P proposes an original-sample perturbation strategy to ensure that the saliency map accurately marks key regions with semantic meaning, and it uses positive and negative saliency maps to highlight the regions that have a decisive impact on the model's different decision results.

For explaining the O-contrastive causal problem, this paper focuses on adversarial examples, a common O-contrastive causal problem in image classification. Specifically, on the one hand, we propose an adversarial example influence quantification algorithm based on region decision sensitivity measurement. On the other hand, we propose CCE-O, a counterfactual visual explanation algorithm for adversarial examples that explains the original image and the adversarial example, respectively.

Finally, this paper conducts an experimental evaluation on several real image datasets and multiple classification models, and verifies from multiple perspectives, including qualitative analysis, quantitative analysis, and a user study, that the proposed schemes achieve effective visual explanations of image classification models in multiple causal problem scenarios.
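To make the region decision sensitivity measurement mentioned above more concrete, the sketch below is an assumption-laden illustration rather than the paper's actual quantification algorithm: it restores an adversarial image to its original pixels one patch at a time and records how much the probability of the adversarially induced class drops. The patch size and the use of softmax probabilities are arbitrary choices made for this example.

```python
import torch


def region_decision_sensitivity(model, x_orig, x_adv, adv_class, patch=16):
    """Illustrative patch-level sensitivity scores for an adversarial example.

    For each patch, the adversarial perturbation is undone inside that patch
    only; the drop in the probability of the adversarial class measures how
    strongly the region contributes to the adversarial decision. Assumes
    x_orig and x_adv have shape (1, C, H, W) with H and W divisible by patch.
    """
    with torch.no_grad():
        p_adv = torch.softmax(model(x_adv), dim=1)[0, adv_class]
        _, _, H, W = x_adv.shape
        scores = torch.zeros(H // patch, W // patch)
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                x_mix = x_adv.clone()
                # replace this region's adversarial pixels with the originals
                x_mix[..., i:i + patch, j:j + patch] = \
                    x_orig[..., i:i + patch, j:j + patch]
                p_mix = torch.softmax(model(x_mix), dim=1)[0, adv_class]
                scores[i // patch, j // patch] = (p_adv - p_mix).item()
    return scores
```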