Causal Layering via Conditional Entropy

Itai Feigenbaum, Devansh Arpit, Shelby Heinecke, Juan Carlos Niebles, Weiran Yao, Caiming Xiong, Silvio Savarese, Huan Wang
Proceedings of the Third Conference on Causal Learning and Reasoning, PMLR 236:1176-1191, 2024.

Abstract

Causal discovery aims to recover information about an unobserved causal graph from the observable data it generates. Layerings are orderings of the variables that place causes before effects. In this paper, we provide ways to recover layerings of a graph when distributions are discrete, accessing the data only through a conditional entropy oracle. Our algorithms work by repeatedly removing sources or sinks from the graph. Under appropriate assumptions and conditioning, we can separate the sources or sinks from the remaining nodes by comparing their conditional entropy to the unconditional entropy of their noise. Our algorithms are provably correct and run in worst-case quadratic time. The main assumptions are faithfulness and injective noise, together with either known noise entropies or weakly monotonically increasing noise entropies along directed paths. In addition, we require one of the following: a very mild extension of faithfulness, strictly monotonically increasing noise entropies, or an expansion of noise injectivity to include one additional argument of the structural functions.
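To make the peeling procedure concrete, below is a minimal Python sketch of the sink-removal variant under the known-noise-entropies assumption. The oracle cond_entropy(x, S), the noise_entropy table, and the tolerance tol are illustrative names assumed here, not the paper's actual interface. The sink test encodes the comparison described in the abstract: under injective noise, a node's entropy conditioned on all other remaining variables equals its noise entropy exactly when it is a sink of the remaining graph.

```python
from typing import Callable, Dict, FrozenSet, Hashable, List, Set


def layering_by_sink_removal(
    nodes: Set[Hashable],
    cond_entropy: Callable[[Hashable, FrozenSet[Hashable]], float],
    noise_entropy: Dict[Hashable, float],
    tol: float = 1e-9,
) -> List[Set[Hashable]]:
    """Recover a layering by repeatedly stripping the sinks of the remaining graph.

    cond_entropy(x, s) is assumed to return H(x | s); noise_entropy[x] is the
    assumed-known entropy H(N_x) of x's exogenous noise.
    """
    remaining = set(nodes)
    reversed_layers: List[Set[Hashable]] = []
    while remaining:
        # With injective noise, H(X | rest) <= H(X | parents) = H(N_X), with
        # equality exactly when X has no remaining descendants: a sink's noise
        # is fully exposed once its parents are conditioned on, while a
        # non-sink's descendants leak extra information about its noise.
        sinks = {
            x
            for x in remaining
            if abs(cond_entropy(x, frozenset(remaining - {x})) - noise_entropy[x]) <= tol
        }
        if not sinks:
            raise RuntimeError(
                "No sink detected: an assumption is violated or the oracle is inexact."
            )
        reversed_layers.append(sinks)
        remaining -= sinks  # peel off the last layer and recurse on the rest
    # Each round makes at most |remaining| oracle calls, so the total number of
    # calls is quadratic in the number of nodes, matching the stated worst case.
    return list(reversed(reversed_layers))  # layers ordered causes-first
```

The source-removal variant proceeds symmetrically, removing first layers instead of last ones; in either case the layers are emitted one per round, which is what bounds the running time at quadratic in the worst case.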

Cite this Paper


BibTeX
@InProceedings{pmlr-v236-feigenbaum24a,
  title     = {Causal Layering via Conditional Entropy},
  author    = {Feigenbaum, Itai and Arpit, Devansh and Heinecke, Shelby and Niebles, Juan Carlos and Yao, Weiran and Xiong, Caiming and Savarese, Silvio and Wang, Huan},
  booktitle = {Proceedings of the Third Conference on Causal Learning and Reasoning},
  pages     = {1176--1191},
  year      = {2024},
  editor    = {Locatello, Francesco and Didelez, Vanessa},
  volume    = {236},
  series    = {Proceedings of Machine Learning Research},
  month     = {01--03 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v236/feigenbaum24a/feigenbaum24a.pdf},
  url       = {https://proceedings.mlr.press/v236/feigenbaum24a.html},
  abstract  = {Causal discovery aims to recover information about an unobserved causal graph from the observable data it generates. Layerings are orderings of the variables which place causes before effects. In this paper, we provide ways to recover layerings of a graph by accessing the data via a conditional entropy oracle, when distributions are discrete. Our algorithms work by repeatedly removing sources or sinks from the graph. Under appropriate assumptions and conditioning, we can separate the sources or sinks from the remainder of the nodes by comparing their conditional entropy to the unconditional entropy of their noise. Our algorithms are provably correct and run in worst-case quadratic time. The main assumptions are faithfulness and injective noise, and either known noise entropies or weakly monotonically increasing noise entropies along directed paths. In addition, we require one of either a very mild extension of faithfulness, or strictly monotonically increasing noise entropies, or expanding noise injectivity to include an additional single argument in the structural functions.}
}
Endnote
%0 Conference Paper
%T Causal Layering via Conditional Entropy
%A Itai Feigenbaum
%A Devansh Arpit
%A Shelby Heinecke
%A Juan Carlos Niebles
%A Weiran Yao
%A Caiming Xiong
%A Silvio Savarese
%A Huan Wang
%B Proceedings of the Third Conference on Causal Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2024
%E Francesco Locatello
%E Vanessa Didelez
%F pmlr-v236-feigenbaum24a
%I PMLR
%P 1176--1191
%U https://proceedings.mlr.press/v236/feigenbaum24a.html
%V 236
%X Causal discovery aims to recover information about an unobserved causal graph from the observable data it generates. Layerings are orderings of the variables which place causes before effects. In this paper, we provide ways to recover layerings of a graph by accessing the data via a conditional entropy oracle, when distributions are discrete. Our algorithms work by repeatedly removing sources or sinks from the graph. Under appropriate assumptions and conditioning, we can separate the sources or sinks from the remainder of the nodes by comparing their conditional entropy to the unconditional entropy of their noise. Our algorithms are provably correct and run in worst-case quadratic time. The main assumptions are faithfulness and injective noise, and either known noise entropies or weakly monotonically increasing noise entropies along directed paths. In addition, we require one of either a very mild extension of faithfulness, or strictly monotonically increasing noise entropies, or expanding noise injectivity to include an additional single argument in the structural functions.
APA
Feigenbaum, I., Arpit, D., Heinecke, S., Niebles, J.C., Yao, W., Xiong, C., Savarese, S. & Wang, H. (2024). Causal Layering via Conditional Entropy. Proceedings of the Third Conference on Causal Learning and Reasoning, in Proceedings of Machine Learning Research 236:1176-1191. Available from https://proceedings.mlr.press/v236/feigenbaum24a.html.