On the Interventional Kullback-Leibler Divergence

Jonas Bernhard Wildberger, Siyuan Guo, Arnab Bhattacharyya, Bernhard Schölkopf
Proceedings of the Second Conference on Causal Learning and Reasoning, PMLR 213:328-349, 2023.

Abstract

Modern machine learning approaches excel in static settings where a large amount of i.i.d. training data are available for a given task. In a dynamic environment though, an intelligent agent needs to be able to transfer knowledge and re-use learned components across domains. It has been argued that this may be possible through causal models, aiming to mirror the modularity of the real world in terms of independent causal mechanisms. However, the true causal structure underlying a given set of data is generally not identifiable, so it is desirable to have means to quantify differences between models (e.g., between the ground truth and an estimate), on both the observational and interventional level. In the present work, we introduce the Interventional Kullback-Leibler (IKL) divergence to quantify both structural and distributional differences between models based on a finite set of multi-environment distributions generated by interventions from the ground truth. Since we generally cannot quantify all differences between causal models for every finite set of interventional distributions, we propose a sufficient condition on the intervention targets to identify subsets of observed variables on which the models provably agree or disagree.
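As a rough illustration of the kind of quantity the abstract describes (a sketch only; the paper's exact definition may differ), an interventional KL divergence between two causal models $M$ and $\widetilde{M}$ over a finite family of intervention targets $\mathcal{I}$ (including the empty, observational intervention) could take the form

$$D_{\mathrm{IKL}}^{\mathcal{I}}\big(M \,\|\, \widetilde{M}\big) \;=\; \frac{1}{|\mathcal{I}|} \sum_{I \in \mathcal{I}} D_{\mathrm{KL}}\big(P_M^{\mathrm{do}(I)} \,\big\|\, P_{\widetilde{M}}^{\mathrm{do}(I)}\big),$$

where $P_M^{\mathrm{do}(I)}$ denotes the distribution that $M$ entails under an intervention on the targets $I$. In this form, purely distributional differences already appear in the observational term ($I = \emptyset$), whereas structural differences may only surface once some intervention in $\mathcal{I}$ exposes a disagreeing mechanism; since a finite $\mathcal{I}$ cannot expose every such disagreement, conditions on the intervention targets are needed, as the abstract notes.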

Cite this Paper

BibTeX
@InProceedings{pmlr-v213-wildberger23a,
  title     = {On the Interventional Kullback-Leibler Divergence},
  author    = {Wildberger, Jonas Bernhard and Guo, Siyuan and Bhattacharyya, Arnab and Sch\"olkopf, Bernhard},
  booktitle = {Proceedings of the Second Conference on Causal Learning and Reasoning},
  pages     = {328--349},
  year      = {2023},
  editor    = {van der Schaar, Mihaela and Zhang, Cheng and Janzing, Dominik},
  volume    = {213},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v213/wildberger23a/wildberger23a.pdf},
  url       = {https://proceedings.mlr.press/v213/wildberger23a.html},
  abstract  = {Modern machine learning approaches excel in static settings where a large amount of i.i.d. training data are available for a given task. In a dynamic environment though, an intelligent agent needs to be able to transfer knowledge and re-use learned components across domains. It has been argued that this may be possible through causal models, aiming to mirror the modularity of the real world in terms of independent causal mechanisms. However, the true causal structure underlying a given set of data is generally not identifiable, so it is desirable to have means to quantify differences between models (e.g., between the ground truth and an estimate), on both the observational and interventional level. In the present work, we introduce the Interventional Kullback-Leibler (IKL) divergence to quantify both structural and distributional differences between models based on a finite set of multi-environment distributions generated by interventions from the ground truth. Since we generally cannot quantify all differences between causal models for every finite set of interventional distributions, we propose a sufficient condition on the intervention targets to identify subsets of observed variables on which the models provably agree or disagree.}
}
Endnote
%0 Conference Paper
%T On the Interventional Kullback-Leibler Divergence
%A Jonas Bernhard Wildberger
%A Siyuan Guo
%A Arnab Bhattacharyya
%A Bernhard Schölkopf
%B Proceedings of the Second Conference on Causal Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2023
%E Mihaela van der Schaar
%E Cheng Zhang
%E Dominik Janzing
%F pmlr-v213-wildberger23a
%I PMLR
%P 328--349
%U https://proceedings.mlr.press/v213/wildberger23a.html
%V 213
%X Modern machine learning approaches excel in static settings where a large amount of i.i.d. training data are available for a given task. In a dynamic environment though, an intelligent agent needs to be able to transfer knowledge and re-use learned components across domains. It has been argued that this may be possible through causal models, aiming to mirror the modularity of the real world in terms of independent causal mechanisms. However, the true causal structure underlying a given set of data is generally not identifiable, so it is desirable to have means to quantify differences between models (e.g., between the ground truth and an estimate), on both the observational and interventional level. In the present work, we introduce the Interventional Kullback-Leibler (IKL) divergence to quantify both structural and distributional differences between models based on a finite set of multi-environment distributions generated by interventions from the ground truth. Since we generally cannot quantify all differences between causal models for every finite set of interventional distributions, we propose a sufficient condition on the intervention targets to identify subsets of observed variables on which the models provably agree or disagree.
APA
Wildberger, J.B., Guo, S., Bhattacharyya, A. & Schölkopf, B. (2023). On the Interventional Kullback-Leibler Divergence. Proceedings of the Second Conference on Causal Learning and Reasoning, in Proceedings of Machine Learning Research 213:328-349. Available from https://proceedings.mlr.press/v213/wildberger23a.html.

Related Material

Download PDF: https://proceedings.mlr.press/v213/wildberger23a/wildberger23a.pdf