Quantifying intrinsic causal contributions via structure preserving interventions

Dominik Janzing, Patrick Blöbaum, Atalanti A Mastakouri, Philipp M Faller, Lenon Minorics, Kailash Budhathoki
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2188-2196, 2024.

Abstract

We propose a notion of causal influence that describes the ‘intrinsic’ part of the contribution of a node to a target node in a DAG. By recursively writing each node as a function of the upstream noise terms, we separate the intrinsic information added by each node from the information it inherits from its ancestors. To interpret the intrinsic information as a causal contribution, we consider ‘structure-preserving interventions’ that randomize each node in a way that mimics the usual dependence on the parents and does not perturb the observed joint distribution. To obtain a measure that is invariant under arbitrary orderings of the nodes, we use Shapley-based symmetrization and show that it reduces in the linear case to simple ANOVA after resolving the target node into noise variables. We describe our contribution analysis for variance and entropy, but contributions for other target metrics can be defined analogously.
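
For the linear special case mentioned in the abstract, the decomposition is easy to sketch: resolving the target node into the independent upstream noise terms turns each node's intrinsic contribution to the target variance into the variance explained by that node's own noise term, so the Shapley-symmetrized measure coincides with a simple ANOVA. The following Python sketch illustrates this on an assumed three-node chain (illustrative coefficients and model, not the authors' code):

# Intrinsic contributions to Var(X3) in a linear SCM X1 -> X2 -> X3.
# Illustrative sketch only; the chain structure and coefficients are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent noise terms, one per node.
N1, N2, N3 = rng.normal(size=(3, n))

# Linear structural equations.
X1 = N1
X2 = 0.8 * X1 + N2
X3 = 0.5 * X2 + N3          # target node

# Resolve the target into noise terms: X3 = (0.5*0.8)*N1 + 0.5*N2 + N3.
# Each node's intrinsic contribution is the variance carried by its own noise term.
coeffs = np.array([0.5 * 0.8, 0.5, 1.0])
contributions = coeffs**2 * np.array([N1.var(), N2.var(), N3.var()])

print("Var(X3):", X3.var())
print("intrinsic contributions of X1, X2, X3:", contributions)
print("sum of contributions:", contributions.sum())   # ~ Var(X3)

For nonlinear or non-additive models, the structure-preserving interventions and Shapley symmetrization described in the paper are needed; the closed-form ANOVA above covers only the linear case stated in the abstract.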

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-janzing24a,
  title     = {Quantifying intrinsic causal contributions via structure preserving interventions},
  author    = {Janzing, Dominik and Bl\"{o}baum, Patrick and Mastakouri, Atalanti A. and Faller, Philipp M. and Minorics, Lenon and Budhathoki, Kailash},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2188--2196},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/janzing24a/janzing24a.pdf},
  url       = {https://proceedings.mlr.press/v238/janzing24a.html}
}
APA
Janzing, D., Blöbaum, P., Mastakouri, A. A., Faller, P. M., Minorics, L. & Budhathoki, K. (2024). Quantifying intrinsic causal contributions via structure preserving interventions. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2188-2196. Available from https://proceedings.mlr.press/v238/janzing24a.html.
