
Study On The Pruning Strategies For LiNGAM

Posted on: 2016-05-10    Degree: Master    Type: Thesis
Country: China    Candidate: H W Lv    Full Text: PDF
GTID: 2308330461956015    Subject: Computer Science and Technology

Abstract/Summary:
Causal discovery is a foundation of research in the natural sciences. Causation differs from statistical correlation, because causal knowledge supports inference tasks such as prediction, explanation, and intervention. Pearl et al. proposed a causal model built on Bayesian network theory together with additional assumptions such as the Causal Markov Condition; the model uses a directed acyclic graph for visualization and the Bayesian network as the inference tool. Following the wide attention given to Bayesian network structure learning, the learning and inference of causal structure has become a new research hotspot in recent years.

Both causal structure learning and Bayesian network structure learning aim to reconstruct the structure among variables from observational data; the difference is that causal structure learning seeks causal relationships rather than mere correlations. The basic question in causal structure learning is the identification of causation. In recent years, the LiNGAM model proposed by Shimizu et al. and the Additive Noise Model proposed by Hoyer et al. have shown that non-Gaussian noise and nonlinear relations between variables play an important role in identifying causal direction. Thus, recognizing causality can be divided into two basic problems: finding the undirected relations and determining the causal directions.

The LiNGAM algorithm proposed by Shimizu et al. is one of the classical learning algorithms for the linear non-Gaussian acyclic model (LiNGAM). We find that the pruning method in the LiNGAM algorithm relies only on statistical hypothesis testing and does not take into account the Causal Markov Condition, the basic assumption of the causal model; it also has high runtime complexity and relatively low accuracy on sparse graphs. The pruning task is to determine whether a causal relationship exists, which is one of the basic problems of causality identification. Drawing on the conditional independence tests used in Bayesian networks, we propose a new pruning strategy. Following the causal order, this strategy prunes the causal model by examining conditional independence based on the Markov blanket. Since partial correlation coefficients can reflect conditional independence between variables in a linear causal model, we use a partial correlation test to examine whether conditional independence holds. To determine the existence of a causal relation, only two conditional independence tests are needed in our method, far fewer than the number required by Bayesian network structure learning algorithms such as the PC algorithm proposed by Spirtes et al. Extensive experiments on simulated data comparing our method with the existing pruning method in LiNGAM show that our method achieves higher accuracy and a lower erroneous-pruning rate while also running faster.
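The core technical ingredient described above is a partial-correlation test of conditional independence applied along an estimated causal order. The sketch below is a minimal illustration only, not a reproduction of the thesis's method: it shows a standard Fisher-z partial-correlation test and a generic prune-by-causal-order loop. The function names, the choice of conditioning set (all other predecessors of the later variable), and the significance level alpha=0.05 are assumptions for illustration; the thesis's own strategy conditions on a Markov-blanket-based set so that only two tests per candidate edge are needed.

```python
import numpy as np
from scipy import stats

def partial_correlation(data, i, j, cond):
    """Partial correlation of columns i and j given the columns in `cond`,
    computed from the inverse of the covariance matrix of [i, j] + cond."""
    idx = [i, j] + list(cond)
    cov = np.cov(data[:, idx], rowvar=False)
    prec = np.linalg.pinv(cov)
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

def ci_test(data, i, j, cond, alpha=0.05):
    """Fisher z-test of X_i independent of X_j given X_cond.
    Returns True when conditional independence is NOT rejected."""
    n = data.shape[0]
    r = np.clip(partial_correlation(data, i, j, cond), -0.9999999, 0.9999999)
    z = 0.5 * np.log((1 + r) / (1 - r))              # Fisher z-transform
    stat = np.sqrt(n - len(cond) - 3) * abs(z)
    p_value = 2.0 * (1.0 - stats.norm.cdf(stat))
    return p_value > alpha

def prune_by_causal_order(data, causal_order, alpha=0.05):
    """Generic pruning loop: for each pair (i before j) in the causal order,
    keep the edge i -> j only if X_i and X_j stay dependent given the other
    predecessors of j.  This illustrates the overall flow, not the thesis's
    two-test, Markov-blanket-based procedure."""
    edges = []
    for pos, j in enumerate(causal_order):
        predecessors = causal_order[:pos]
        for i in predecessors:
            cond = [k for k in predecessors if k != i]
            if not ci_test(data, i, j, cond, alpha):
                edges.append((i, j))      # dependence remains: keep i -> j
    return edges
```

As a hypothetical usage, given an n-by-p sample array X and a causal order obtained from an ICA-based LiNGAM estimation, prune_by_causal_order(X, order) would return the directed edges retained after pruning.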
Keywords/Search Tags: Causality, Partial Correlation, Conditional Independence, Pruning Method