On the Identifiability of Causal Abstractions

Xiusi Li, Sékou-Oumar Kaba, Siamak Ravanbakhsh
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:3241-3249, 2025.

Abstract

Causal representation learning (CRL) enhances machine learning models’ robustness and generalizability by learning structural causal models associated with data-generating processes. We focus on a family of CRL methods that uses contrastive data pairs in the observable space, generated before and after a random, unknown intervention, to identify the latent causal model. Brehmer et al. (2022) showed that this is indeed possible, given that all latent variables can be intervened on individually. However, this is a highly restrictive assumption in many systems. In this work, we instead assume interventions on arbitrary subsets of latent variables, which is more realistic. We introduce a theoretical framework that calculates the degree to which we can identify a causal model, given a set of possible interventions, up to an abstraction that describes the system at a higher level of granularity.

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-li25g,
  title     = {On the Identifiability of Causal Abstractions},
  author    = {Li, Xiusi and Kaba, S{\'e}kou-Oumar and Ravanbakhsh, Siamak},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {3241--3249},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/li25g/li25g.pdf},
  url       = {https://proceedings.mlr.press/v258/li25g.html},
  abstract  = {Causal representation learning (CRL) enhances machine learning models' robustness and generalizability by learning structural causal models associated with data-generating processes. We focus on a family of CRL methods that uses contrastive data pairs in the observable space, generated before and after a random, unknown intervention, to identify the latent causal model. Brehmer et al. (2022) showed that this is indeed possible, given that all latent variables can be intervened on individually. However, this is a highly restrictive assumption in many systems. In this work, we instead assume interventions on arbitrary subsets of latent variables, which is more realistic. We introduce a theoretical framework that calculates the degree to which we can identify a causal model, given a set of possible interventions, up to an abstraction that describes the system at a higher level of granularity.}
}
Endnote
%0 Conference Paper
%T On the Identifiability of Causal Abstractions
%A Xiusi Li
%A Sékou-Oumar Kaba
%A Siamak Ravanbakhsh
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-li25g
%I PMLR
%P 3241--3249
%U https://proceedings.mlr.press/v258/li25g.html
%V 258
%X Causal representation learning (CRL) enhances machine learning models’ robustness and generalizability by learning structural causal models associated with data-generating processes. We focus on a family of CRL methods that uses contrastive data pairs in the observable space, generated before and after a random, unknown intervention, to identify the latent causal model. Brehmer et al. (2022) showed that this is indeed possible, given that all latent variables can be intervened on individually. However, this is a highly restrictive assumption in many systems. In this work, we instead assume interventions on arbitrary subsets of latent variables, which is more realistic. We introduce a theoretical framework that calculates the degree to which we can identify a causal model, given a set of possible interventions, up to an abstraction that describes the system at a higher level of granularity.
APA
Li, X., Kaba, S.-O., & Ravanbakhsh, S. (2025). On the Identifiability of Causal Abstractions. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:3241–3249. Available from https://proceedings.mlr.press/v258/li25g.html.