Deep Learning Model Interpretation Methods Based On Counterfactual Image Generation

Posted on: 2024-04-08
Degree: Master
Type: Thesis
Country: China
Candidate: S D Chen
Full Text: PDF
GTID: 2568306941992979
Subject: Computer technology
Abstract/Summary:
With the continued development of deep learning, deep learning systems have become increasingly widespread, bringing great convenience to people's lives. However, complex deep learning models are widely regarded as black boxes whose decision-making processes are difficult to explain, so there is an urgent need to improve their transparency and interpretability. Counterfactual explanation is a common technique that simulates changes to certain features of an input in order to infer how the model's decision would change accordingly. In this process, the model generates a set of counterfactual data: hypothetical inputs used to explain the model's decisions. Although current counterfactual explanation methods show promising performance, they often overlook the semantic consistency of the generated results, producing explanations that are unrealistic and difficult for humans to understand. This thesis therefore focuses on deep learning models and investigates effective counterfactual explanation techniques.

First, to address the problem of semantic inconsistency in counterfactual visual explanations (CVE), a deep clustering-based counterfactual explanation generation model is proposed. The method introduces semantic constraints through a deep clustering module, which forces the model to replace only spatial units that are semantically similar or consistent. Experimental results show that the proposed model outperforms existing CVE methods on the public CUB200 dataset.

Second, to overcome the limitation of using a single perturbation image in CVE methods, an improved CVE method based on multiple perturbation images is proposed. Multiple perturbation images expand the model's search space over spatial units, enabling it to find more discriminative spatial units while reducing the number of edits. In addition, an image cropping module is incorporated to reduce the model's computational complexity.

Finally, the performance of the overall model is validated on the CUB200 and Stanford Dogs datasets, with experimental comparisons against existing counterfactual explanation generation methods. Both the visual results and various evaluation metrics show that the proposed model outperforms other methods, generating explanations that are more semantically consistent and more discriminative, at lower computational cost.
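To make the core mechanism concrete, below is a minimal NumPy sketch of the greedy spatial-unit replacement that CVE-style methods perform, with a plain k-means assignment standing in for the deep clustering module described above. Everything here is an illustrative assumption rather than the thesis's implementation: the function names (kmeans_assign, counterfactual_edit), the toy linear classification head Wc, and the feature-map shapes are all hypothetical.

```python
# Minimal, self-contained sketch (NumPy only) of greedy spatial-unit
# replacement for counterfactual visual explanations, with a
# cluster-consistency constraint standing in for the thesis's deep
# clustering module. All names and shapes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, W, C, K = 7, 7, 64, 5          # feature-map grid, channels, clusters

def kmeans_assign(cells, k, iters=10):
    """Plain k-means over flattened spatial cells; returns a label per cell."""
    centers = cells[rng.choice(len(cells), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((cells[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = cells[labels == j].mean(0)
    return labels

def classify(feat, Wc):
    """Toy head: mean-pool the H x W x C map, return class scores."""
    return Wc @ feat.reshape(-1, C).mean(0)

def counterfactual_edit(fq, fd, Wc, target, max_edits=10):
    """Greedily copy distractor cells into the query feature map until the
    prediction flips to `target`, swapping only cells that fall in the
    same semantic cluster (the consistency constraint)."""
    cells = np.concatenate([fq.reshape(-1, C), fd.reshape(-1, C)])
    labels = kmeans_assign(cells, K)
    lq, ld = labels[: H * W], labels[H * W:]
    fq = fq.copy()
    edits = []
    for _ in range(max_edits):
        if np.argmax(classify(fq, Wc)) == target:
            break                              # prediction already flipped
        best, best_gain = None, 0.0
        base = classify(fq, Wc)[target]
        for i in range(H * W):                 # query cell to overwrite
            for j in range(H * W):             # distractor cell to copy in
                if lq[i] != ld[j]:             # cluster-consistency check
                    continue
                trial = fq.copy()
                trial.reshape(-1, C)[i] = fd.reshape(-1, C)[j]
                gain = classify(trial, Wc)[target] - base
                if gain > best_gain:
                    best, best_gain = (i, j), gain
        if best is None:
            break                              # no admissible swap improves
        i, j = best
        fq.reshape(-1, C)[i] = fd.reshape(-1, C)[j]
        edits.append(best)
    return fq, edits

# Demo with random features and a random 2-class linear head.
fq = rng.normal(size=(H, W, C))
fd = rng.normal(size=(H, W, C)) + 0.5
Wc = rng.normal(size=(2, C))
_, edits = counterfactual_edit(fq, fd, Wc, target=1)
print(f"{len(edits)} cell swaps performed: {edits}")
```

Under this reading, the thesis's second contribution would correspond to extending the inner loop over cells drawn from several distractor feature maps rather than one, enlarging the pool of admissible swaps and thus reducing the number of edits needed.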
Keywords/Search Tags: Deep Learning, Counterfactual Explanation, Deep Clustering, Image Matting