Falsification of Unconfoundedness by Testing Independence of Causal Mechanisms

Rickard Karlsson, J.H. Krijthe
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:29128-29147, 2025.

Abstract

A major challenge in estimating treatment effects in observational studies is the reliance on untestable conditions such as the assumption of no unmeasured confounding. In this work, we propose an algorithm that can falsify the assumption of no unmeasured confounding in a setting with observational data from multiple heterogeneous sources, which we refer to as environments. Our proposed falsification strategy leverages a key observation that unmeasured confounding can cause observed causal mechanisms to appear dependent. Building on this observation, we develop a novel two-stage procedure that detects these dependencies with high statistical power while controlling false positives. The algorithm does not require access to randomized data and, in contrast to other falsification approaches, functions even under transportability violations when the environment has a direct effect on the outcome of interest. To showcase the practical relevance of our approach, we show that our method is able to efficiently detect confounding on both simulated and semi-synthetic data.
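The key observation in the abstract — that unmeasured confounding can make the observed causal mechanisms (roughly, P(T|X) and E[Y|T,X]) appear dependent across environments — can be illustrated with a toy simulation. This sketch is an invented illustration, not the paper's two-stage procedure: the linear-Gaussian setup, all parameter values, and the choice of mechanism summaries are assumptions made purely for exposition.

```python
import numpy as np

def simulate_env(rng, confounded, n=5000):
    """One 'environment' whose mechanisms vary across environments.
    Returns summaries of the two observed mechanisms:
    Var(T|X) for the treatment mechanism, and the T-coefficient
    in a linear fit of E[Y|T,X] for the outcome mechanism.
    (Toy linear-Gaussian setup; parameters invented for illustration.)"""
    a = rng.normal(0.0, 1.0)          # env-specific X -> T strength
    s = rng.uniform(0.2, 1.0)         # env-specific strength of U -> T
    u_y = 1.5 if confounded else 0.0  # U -> Y only in the confounded case
    x = rng.normal(size=n)
    u = rng.normal(size=n)            # unmeasured variable
    t = a * x + s * u + rng.normal(size=n)
    y = 1.0 * x + 0.5 * t + u_y * u + rng.normal(size=n)
    resid_t = t - np.polyval(np.polyfit(x, t, 1), x)
    var_t_given_x = resid_t.var()     # summary of P(T|X)
    design = np.column_stack([t, x, np.ones(n)])
    coef_y = np.linalg.lstsq(design, y, rcond=None)[0][0]  # T-coef in Y ~ T + X
    return var_t_given_x, coef_y

rng = np.random.default_rng(0)
results = {}
for confounded in (False, True):
    stats = np.array([simulate_env(rng, confounded) for _ in range(200)])
    results[confounded] = np.corrcoef(stats[:, 0], stats[:, 1])[0, 1]
    print(f"confounded={confounded}: corr across environments = {results[confounded]:+.2f}")
```

Without confounding, the two fitted mechanism summaries vary independently across environments, so their correlation is near zero; with a shared unmeasured cause, the bias in the outcome fit depends on the treatment mechanism, so a clear dependence appears. The paper's actual procedure tests such dependencies with proper error control rather than via this raw correlation.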

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-karlsson25a,
  title     = {Falsification of Unconfoundedness by Testing Independence of Causal Mechanisms},
  author    = {Karlsson, Rickard and Krijthe, J.H.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {29128--29147},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/karlsson25a/karlsson25a.pdf},
  url       = {https://proceedings.mlr.press/v267/karlsson25a.html},
  abstract  = {A major challenge in estimating treatment effects in observational studies is the reliance on untestable conditions such as the assumption of no unmeasured confounding. In this work, we propose an algorithm that can falsify the assumption of no unmeasured confounding in a setting with observational data from multiple heterogeneous sources, which we refer to as environments. Our proposed falsification strategy leverages a key observation that unmeasured confounding can cause observed causal mechanisms to appear dependent. Building on this observation, we develop a novel two-stage procedure that detects these dependencies with high statistical power while controlling false positives. The algorithm does not require access to randomized data and, in contrast to other falsification approaches, functions even under transportability violations when the environment has a direct effect on the outcome of interest. To showcase the practical relevance of our approach, we show that our method is able to efficiently detect confounding on both simulated and semi-synthetic data.}
}
Endnote
%0 Conference Paper
%T Falsification of Unconfoundedness by Testing Independence of Causal Mechanisms
%A Rickard Karlsson
%A J.H. Krijthe
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-karlsson25a
%I PMLR
%P 29128--29147
%U https://proceedings.mlr.press/v267/karlsson25a.html
%V 267
%X A major challenge in estimating treatment effects in observational studies is the reliance on untestable conditions such as the assumption of no unmeasured confounding. In this work, we propose an algorithm that can falsify the assumption of no unmeasured confounding in a setting with observational data from multiple heterogeneous sources, which we refer to as environments. Our proposed falsification strategy leverages a key observation that unmeasured confounding can cause observed causal mechanisms to appear dependent. Building on this observation, we develop a novel two-stage procedure that detects these dependencies with high statistical power while controlling false positives. The algorithm does not require access to randomized data and, in contrast to other falsification approaches, functions even under transportability violations when the environment has a direct effect on the outcome of interest. To showcase the practical relevance of our approach, we show that our method is able to efficiently detect confounding on both simulated and semi-synthetic data.
APA
Karlsson, R. & Krijthe, J. H. (2025). Falsification of Unconfoundedness by Testing Independence of Causal Mechanisms. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:29128-29147. Available from https://proceedings.mlr.press/v267/karlsson25a.html.
