Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning

Yuejiang Liu, Alexandre Alahi, Chris Russell, Max Horn, Dominik Zietlow, Bernhard Schölkopf, Francesco Locatello
Proceedings of the Second Conference on Causal Learning and Reasoning, PMLR 213:553-573, 2023.

Abstract

Recent years have seen a surge of interest in learning high-level causal representations from low-level image pairs under interventions. Yet, existing efforts are largely limited to simple synthetic settings that are far away from real-world problems. In this paper, we present CausalTriplet, a causal representation learning benchmark featuring not only visually more complex scenes, but also two crucial desiderata commonly overlooked in previous works: (i) an actionable counterfactual setting, where only certain (object-level) variables allow for counterfactual observations whereas others do not; (ii) an interventional downstream task with an emphasis on out-of-distribution robustness from the independent causal mechanisms principle. Through extensive experiments, we find that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts. However, recent causal representation learning methods still struggle to identify such latent structures, indicating substantial challenges and opportunities in CausalTriplet. Our code and datasets will be available at https://sites.google.com/view/causaltriplet.

Cite this Paper
BibTeX
@InProceedings{pmlr-v213-liu23a,
  title     = {Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning},
  author    = {Liu, Yuejiang and Alahi, Alexandre and Russell, Chris and Horn, Max and Zietlow, Dominik and Sch\"olkopf, Bernhard and Locatello, Francesco},
  booktitle = {Proceedings of the Second Conference on Causal Learning and Reasoning},
  pages     = {553--573},
  year      = {2023},
  editor    = {van der Schaar, Mihaela and Zhang, Cheng and Janzing, Dominik},
  volume    = {213},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v213/liu23a/liu23a.pdf},
  url       = {https://proceedings.mlr.press/v213/liu23a.html},
  abstract  = {Recent years have seen a surge of interest in learning high-level causal representations from low-level image pairs under interventions. Yet, existing efforts are largely limited to simple synthetic settings that are far away from real-world problems. In this paper, we present CausalTriplet, a causal representation learning benchmark featuring not only visually more complex scenes, but also two crucial desiderata commonly overlooked in previous works: (i) an actionable counterfactual setting, where only certain (object-level) variables allow for counterfactual observations whereas others do not; (ii) an interventional downstream task with an emphasis on out-of-distribution robustness from the independent causal mechanisms principle. Through extensive experiments, we find that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts. However, recent causal representation learning methods still struggle to identify such latent structures, indicating substantial challenges and opportunities in CausalTriplet. Our code and datasets will be available at https://sites.google.com/view/causaltriplet.}
}
Endnote
%0 Conference Paper
%T Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning
%A Yuejiang Liu
%A Alexandre Alahi
%A Chris Russell
%A Max Horn
%A Dominik Zietlow
%A Bernhard Schölkopf
%A Francesco Locatello
%B Proceedings of the Second Conference on Causal Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2023
%E Mihaela van der Schaar
%E Cheng Zhang
%E Dominik Janzing
%F pmlr-v213-liu23a
%I PMLR
%P 553--573
%U https://proceedings.mlr.press/v213/liu23a.html
%V 213
%X Recent years have seen a surge of interest in learning high-level causal representations from low-level image pairs under interventions. Yet, existing efforts are largely limited to simple synthetic settings that are far away from real-world problems. In this paper, we present CausalTriplet, a causal representation learning benchmark featuring not only visually more complex scenes, but also two crucial desiderata commonly overlooked in previous works: (i) an actionable counterfactual setting, where only certain (object-level) variables allow for counterfactual observations whereas others do not; (ii) an interventional downstream task with an emphasis on out-of-distribution robustness from the independent causal mechanisms principle. Through extensive experiments, we find that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts. However, recent causal representation learning methods still struggle to identify such latent structures, indicating substantial challenges and opportunities in CausalTriplet. Our code and datasets will be available at https://sites.google.com/view/causaltriplet.
APA
Liu, Y., Alahi, A., Russell, C., Horn, M., Zietlow, D., Schölkopf, B. &amp; Locatello, F. (2023). Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning. Proceedings of the Second Conference on Causal Learning and Reasoning, in Proceedings of Machine Learning Research 213:553-573. Available from https://proceedings.mlr.press/v213/liu23a.html.