
Deep Learning-driven Causal Effect Estimation And Causal Representation Learning

Posted on: 2024-08-08    Degree: Master    Type: Thesis
Country: China    Candidate: Q S Bao    Full Text: PDF
GTID: 2568307136494844    Subject: Software engineering
Abstract/Summary:
Deep learning models can efficiently process massive amounts of data and fit complex non-linear relationships. They have demonstrated strong performance in tasks such as computer vision and natural language processing, and have also provided powerful support for causal effect estimation in healthcare and intelligent marketing. However, the unconfoundedness assumption on which existing deep causal effect estimation methods rely is often too strict, so the causal effects estimated from real observational data are biased. In addition, because current deep learning models learn and predict from statistical correlations and cannot understand causal relationships, they face challenges such as poor out-of-distribution generalization and a lack of reliability and interpretability. Causal representation learning is an effective way to address these issues, but in real-world scenarios the confounding bias that widely exists limits its ability. To address these problems, this thesis first studies how to use deep generative models to disentangle confounding variables and thereby estimate unbiased causal effects; it then applies the idea of disentangling confounding variables to causal representation learning and proposes a trusted open-set recognition model based on it. The specific research contents are as follows:

(1) To address the limitations of the unconfoundedness assumption in deep causal effect estimation models, we propose an individual causal effect estimation method based on variational generative adversarial networks. The method infers the distribution of latent confounding variables, which both relaxes the unconfoundedness assumption and helps to further control the confounders so that unbiased causal effects can be estimated. Specifically, the method first uses a causal graph to model the generative mechanism of the observational data. A joint optimization strategy combining a variational autoencoder and a generative adversarial network then disentangles the distributions of the latent instrumental, confounding, and adjustment variables. After the confounding variables are controlled, the generative network produces counterfactual outcomes for the final individual causal effect estimate. Experimental results show that, compared with baseline models, the proposed method achieves consistent performance gains on both real and synthetic datasets.

(2) To address the poor generalization of existing open-set recognition methods under covariate shift and confounding bias, we propose a trusted open-set recognition model based on causal representation learning. The model introduces an evidence metric that incorporates causal representation learning to gauge the trustworthiness of open-set recognition results, so it not only generalizes better when recognizing known categories but can also "know what is unknown." Specifically, the model comprises an uncertainty-guided adversarial data augmentation module and a causally disentangled representation module. The two modules are jointly optimized according to the variable relationships modeled by a causal graph, disentangling confounding representations from causal representations to learn causal evidence for final decision-making and for calibrating the trust metric. Experimental results on real and synthetic datasets demonstrate the effectiveness of the proposed method.

(3) To meet the practical needs of trusted open-set image recognition, we designed and implemented a trusted open-set recognition prototype system for image classification. Alongside each recognition result, the system provides an uncertainty measure that reflects how trustworthy the result is. In addition, by setting an uncertainty threshold on recognition results, the system can automatically filter out unknown or misclassified images, effectively avoiding the risks that misjudgments pose to image classification systems in risk-sensitive scenarios.
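The abstract does not give the model's equations, but the core intuition of item (1) — that controlling a (here, observed) confounder removes the bias of a naive comparison — can be illustrated with a minimal pure-Python backdoor-adjustment sketch on synthetic data. All numbers are illustrative; the thesis's actual method infers *latent* confounders with a VAE-GAN, which this sketch does not reproduce.

```python
# Deterministic synthetic population: binary confounder c raises both the
# chance of treatment t and the outcome y.  The true treatment effect is 2.0;
# the confounder adds 3.0 to y, so y = 2*t + 3*c.
population = (
    [{"c": 1, "t": 1, "y": 5.0}] * 8 + [{"c": 1, "t": 0, "y": 3.0}] * 2 +
    [{"c": 0, "t": 1, "y": 2.0}] * 2 + [{"c": 0, "t": 0, "y": 0.0}] * 8
)

def mean_y(units):
    return sum(u["y"] for u in units) / len(units)

# Naive estimate: compare treated vs. untreated, ignoring the confounder.
treated = [u for u in population if u["t"] == 1]
control = [u for u in population if u["t"] == 0]
naive_ate = mean_y(treated) - mean_y(control)

# Backdoor adjustment: compare within each confounder stratum, then
# average the per-stratum differences weighted by P(c).
adjusted_ate = 0.0
for c in (0, 1):
    stratum = [u for u in population if u["c"] == c]
    t1 = [u for u in stratum if u["t"] == 1]
    t0 = [u for u in stratum if u["t"] == 0]
    adjusted_ate += (len(stratum) / len(population)) * (mean_y(t1) - mean_y(t0))

print(naive_ate)     # biased upward by the confounder
print(adjusted_ate)  # recovers the true effect of 2.0
```

The naive difference overstates the effect because treated units disproportionately carry c = 1; stratifying on c recovers the true effect, which is exactly the benefit the thesis seeks by disentangling and controlling latent confounders.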
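For items (2) and (3), the abstract describes an evidence metric that lets the system "know what is unknown" and a threshold that filters out untrusted results. A generic subjective-logic (evidential) sketch of that pattern is below — the evidence values, class count, and 0.5 threshold are hypothetical stand-ins, not the thesis's calibrated model.

```python
def dirichlet_uncertainty(evidence):
    """Map non-negative per-class evidence to Dirichlet parameters and
    return (uncertainty mass, expected class probabilities), using the
    subjective-logic rule u = K / S, where S = sum(evidence) + K."""
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]   # Dirichlet parameters
    s = sum(alpha)                        # Dirichlet strength
    return k / s, [a / s for a in alpha]

def classify_open_set(evidence, threshold=0.5):
    """Return (predicted class index, uncertainty), or ("unknown", u)
    when the uncertainty mass exceeds the threshold."""
    u, probs = dirichlet_uncertainty(evidence)
    if u > threshold:
        return "unknown", u
    return probs.index(max(probs)), u

print(classify_open_set([10.0, 0.0, 0.0]))  # strong evidence: a known class
print(classify_open_set([0.5, 0.4, 0.3]))   # weak evidence: flagged "unknown"
```

Raising the threshold makes the system more conservative (more inputs rejected as unknown), which is the knob the prototype system of item (3) exposes for risk-sensitive deployments.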
Keywords/Search Tags:Causal effect estimation, Causal representation learning, Confounding bias, Covariate shift, Trusted open-set recognition