Inference of Graphical Causal Models: Representing the Meaningful Information of Probability Distributions

Jan Lemeire, Kris Steenhaut
Proceedings of Workshop on Causality: Objectives and Assessment at NIPS 2008, PMLR 6:107-120, 2010.

Abstract

This paper studies the feasibility and interpretation of learning the causal structure from observational data with the principles behind the Kolmogorov Minimal Sufficient Statistic (KMSS). The KMSS provides a generic solution to inductive inference: it states that we should seek the minimal model that captures all regularities of the data. The conditional independencies following from the system’s causal structure are the regularities incorporated in a graphical causal model. The meaningful information provided by a Bayesian network corresponds to the decomposition of the description of the system into Conditional Probability Distributions (CPDs); the decomposition is described by the Directed Acyclic Graph (DAG). For a causal interpretation of the DAG, the decomposition should imply modularity of the CPDs: the CPDs should correspond to independent parts of reality that can be changed independently. We argue that if the shortest description of the joint distribution is given by separate descriptions of the conditional distribution of each variable given its direct causes, the decomposition given by the DAG should be considered the top-ranked causal hypothesis. Even when the causal interpretation is faulty, it serves as a reference model. Modularity becomes implausible, however, if the concatenation of the descriptions of some CPDs is compressible. In that case there may be a meta-mechanism governing some of the mechanisms, or a single mechanism responsible for setting the state of multiple variables.
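To make the decomposition and the compressibility argument of the abstract concrete, the following sketch (our notation, not taken from the paper) recalls the Bayesian-network factorization over variables $X_1,\dots,X_n$ with DAG $G$, a two-part description length in the spirit of the KMSS, and the condition under which modularity becomes implausible. Here $\mathrm{Pa}_G(X_i)$ denotes the parents of $X_i$ in $G$ and $K(\cdot)$ denotes Kolmogorov complexity.

% Bayesian-network factorization described by the DAG G:
% the joint distribution decomposes into one CPD per variable.
\[
  P(X_1,\dots,X_n) \;=\; \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{Pa}_G(X_i)\bigr)
\]
% Two-part description in the spirit of the KMSS: the DAG captures the
% regularities (the conditional independencies); the CPDs supply the rest.
\[
  K(P) \;\lesssim\; K(G) \;+\; \sum_{i=1}^{n} K\bigl(P(X_i \mid \mathrm{Pa}_G(X_i))\bigr)
\]
% Modularity of two CPDs becomes implausible when their concatenated
% descriptions are compressible, i.e. when
\[
  K\bigl(P(X_i \mid \mathrm{Pa}_G(X_i)),\, P(X_j \mid \mathrm{Pa}_G(X_j))\bigr)
  \;<\; K\bigl(P(X_i \mid \mathrm{Pa}_G(X_i))\bigr) \;+\; K\bigl(P(X_j \mid \mathrm{Pa}_G(X_j))\bigr),
\]
% which suggests a shared or governing mechanism behind the two CPDs.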

Cite this Paper


BibTeX
@InProceedings{pmlr-v6-lemeire10a,
  title     = {Inference of Graphical Causal Models: Representing the Meaningful Information of Probability Distributions},
  author    = {Lemeire, Jan and Steenhaut, Kris},
  booktitle = {Proceedings of Workshop on Causality: Objectives and Assessment at NIPS 2008},
  pages     = {107--120},
  year      = {2010},
  editor    = {Guyon, Isabelle and Janzing, Dominik and Schölkopf, Bernhard},
  volume    = {6},
  series    = {Proceedings of Machine Learning Research},
  address   = {Whistler, Canada},
  month     = {12 Dec},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v6/lemeire10a/lemeire10a.pdf},
  url       = {https://proceedings.mlr.press/v6/lemeire10a.html}
}
APA
Lemeire, J. & Steenhaut, K. (2010). Inference of Graphical Causal Models: Representing the Meaningful Information of Probability Distributions. Proceedings of Workshop on Causality: Objectives and Assessment at NIPS 2008, in Proceedings of Machine Learning Research 6:107-120. Available from https://proceedings.mlr.press/v6/lemeire10a.html.
