The Causal Information Bottleneck and Optimal Causal Variable Abstractions

Francisco N. F. Q. Simoes, Mehdi Dastani, Thijs van Ommen
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:3878-3897, 2025.

Abstract

To effectively study complex causal systems, it is often useful to construct abstractions of parts of the system by discarding irrelevant details while preserving key features. The Information Bottleneck (IB) method is a widely used approach to construct variable abstractions by compressing random variables while retaining predictive power over a target variable. Traditional methods like IB are purely statistical and ignore underlying causal structures, making them ill-suited for causal tasks. We propose the Causal Information Bottleneck (CIB), a causal extension of the IB, which compresses a set of chosen variables while maintaining causal control over a target variable. This method produces abstractions of (sets of) variables that are causally interpretable, provide insight into the interactions between the abstracted variables and the target variable, and can be used when reasoning about interventions. We present experimental results demonstrating that the learned abstractions accurately capture causal relations as intended.
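
For context, the classical IB objective (the standard formulation of Tishby, Pereira, and Bialek; not stated on this page) compresses X into a bottleneck variable T while preserving information about Y:

\[
\min_{q(t \mid x)} \; I(X; T) - \beta \, I(T; Y), \qquad \beta \ge 0,
\]

where the minimization is over stochastic encoders q(t | x) and β sets the trade-off between compression and predictive power. Per the abstract, the CIB keeps this compression term but replaces the purely observational term I(T; Y) with a measure of causal control of the abstraction over the target; the precise causal objective is defined in the paper itself.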

Cite this Paper

BibTeX
@InProceedings{pmlr-v286-simoes25a,
  title     = {The Causal Information Bottleneck and Optimal Causal Variable Abstractions},
  author    = {Simoes, Francisco N. F. Q. and Dastani, Mehdi and van Ommen, Thijs},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {3878--3897},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/simoes25a/simoes25a.pdf},
  url       = {https://proceedings.mlr.press/v286/simoes25a.html},
  abstract  = {To effectively study complex causal systems, it is often useful to construct abstractions of parts of the system by discarding irrelevant details while preserving key features. The Information Bottleneck (IB) method is a widely used approach to construct variable abstractions by compressing random variables while retaining predictive power over a target variable. Traditional methods like IB are purely statistical and ignore underlying causal structures, making them ill-suited for causal tasks. We propose the Causal Information Bottleneck (CIB), a causal extension of the IB, which compresses a set of chosen variables while maintaining causal control over a target variable. This method produces abstractions of (sets of) variables which are causally interpretable, give us insight about the interactions between the abstracted variables and the target variable, and can be used when reasoning about interventions. We present experimental results demonstrating that the learned abstractions accurately capture causal relations as intended.}
}
Endnote
%0 Conference Paper
%T The Causal Information Bottleneck and Optimal Causal Variable Abstractions
%A Francisco N. F. Q. Simoes
%A Mehdi Dastani
%A Thijs van Ommen
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-simoes25a
%I PMLR
%P 3878--3897
%U https://proceedings.mlr.press/v286/simoes25a.html
%V 286
%X To effectively study complex causal systems, it is often useful to construct abstractions of parts of the system by discarding irrelevant details while preserving key features. The Information Bottleneck (IB) method is a widely used approach to construct variable abstractions by compressing random variables while retaining predictive power over a target variable. Traditional methods like IB are purely statistical and ignore underlying causal structures, making them ill-suited for causal tasks. We propose the Causal Information Bottleneck (CIB), a causal extension of the IB, which compresses a set of chosen variables while maintaining causal control over a target variable. This method produces abstractions of (sets of) variables which are causally interpretable, give us insight about the interactions between the abstracted variables and the target variable, and can be used when reasoning about interventions. We present experimental results demonstrating that the learned abstractions accurately capture causal relations as intended.
APA
Simoes, F.N.F.Q., Dastani, M. & van Ommen, T. (2025). The Causal Information Bottleneck and Optimal Causal Variable Abstractions. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:3878-3897. Available from https://proceedings.mlr.press/v286/simoes25a.html.
