Ensembled Prediction Intervals for Causal Outcomes Under Hidden Confounding

Myrl G. Marmarelis, Greg Ver Steeg, Aram Galstyan, Fred Morstatter
Proceedings of the Third Conference on Causal Learning and Reasoning, PMLR 236:18-40, 2024.

Abstract

Causal inference of exact individual treatment outcomes in the presence of hidden confounders is rarely possible. Recent work has extended prediction intervals with finite-sample guarantees to partially identifiable causal outcomes, by means of a sensitivity model for hidden confounding. In deep learning, predictors can exploit their inductive biases for better generalization out of sample. We argue that the structure inherent to a deep ensemble should inform a tighter partial identification of the causal outcomes that they predict. We therefore introduce an approach termed Caus-Modens, for characterizing causal outcome intervals by modulated ensembles. We present a simple approach to partial identification using existing causal sensitivity models and show empirically that Caus-Modens gives tighter outcome intervals, as measured by the necessary interval size to achieve sufficient coverage. The last of our three diverse benchmarks is a novel usage of GPT-4 for observational experiments with unknown but probeable ground truth.
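Evaluation note: the abstract measures interval quality by the interval size needed to reach a target coverage level. As a minimal illustrative sketch (not the paper's Caus-Modens method, and with no sensitivity model for hidden confounding), the snippet below forms a prediction interval from a hypothetical deep ensemble's per-member outcome predictions and reports empirical coverage together with average interval width; all data and names are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble predictions: (n_members, n_test) potential-outcome estimates.
ensemble_preds = rng.normal(loc=1.0, scale=0.5, size=(10, 200))
y_true = rng.normal(loc=1.0, scale=0.5, size=200)  # stand-in ground-truth outcomes

def ensemble_interval(preds, alpha=0.1):
    # Central (1 - alpha) interval from the ensemble's empirical distribution.
    lo = np.quantile(preds, alpha / 2, axis=0)
    hi = np.quantile(preds, 1 - alpha / 2, axis=0)
    return lo, hi

lo, hi = ensemble_interval(ensemble_preds, alpha=0.1)
coverage = np.mean((y_true >= lo) & (y_true <= hi))  # fraction of outcomes covered
avg_width = np.mean(hi - lo)                         # tighter is better at a fixed coverage
print(f"coverage={coverage:.2f}, average interval width={avg_width:.2f}")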

Cite this Paper


BibTeX
@InProceedings{pmlr-v236-marmarelis24a,
  title     = {Ensembled Prediction Intervals for Causal Outcomes Under Hidden Confounding},
  author    = {Marmarelis, Myrl G. and Steeg, Greg Ver and Galstyan, Aram and Morstatter, Fred},
  booktitle = {Proceedings of the Third Conference on Causal Learning and Reasoning},
  pages     = {18--40},
  year      = {2024},
  editor    = {Locatello, Francesco and Didelez, Vanessa},
  volume    = {236},
  series    = {Proceedings of Machine Learning Research},
  month     = {01--03 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v236/marmarelis24a/marmarelis24a.pdf},
  url       = {https://proceedings.mlr.press/v236/marmarelis24a.html},
  abstract  = {Causal inference of exact individual treatment outcomes in the presence of hidden confounders is rarely possible. Recent work has extended prediction intervals with finite-sample guarantees to partially identifiable causal outcomes, by means of a sensitivity model for hidden confounding. In deep learning, predictors can exploit their inductive biases for better generalization out of sample. We argue that the structure inherent to a deep ensemble should inform a tighter partial identification of the causal outcomes that they predict. We therefore introduce an approach termed Caus-Modens, for characterizing causal outcome intervals by modulated ensembles. We present a simple approach to partial identification using existing causal sensitivity models and show empirically that Caus-Modens gives tighter outcome intervals, as measured by the necessary interval size to achieve sufficient coverage. The last of our three diverse benchmarks is a novel usage of GPT-4 for observational experiments with unknown but probeable ground truth.}
}
Endnote
%0 Conference Paper
%T Ensembled Prediction Intervals for Causal Outcomes Under Hidden Confounding
%A Myrl G. Marmarelis
%A Greg Ver Steeg
%A Aram Galstyan
%A Fred Morstatter
%B Proceedings of the Third Conference on Causal Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2024
%E Francesco Locatello
%E Vanessa Didelez
%F pmlr-v236-marmarelis24a
%I PMLR
%P 18--40
%U https://proceedings.mlr.press/v236/marmarelis24a.html
%V 236
%X Causal inference of exact individual treatment outcomes in the presence of hidden confounders is rarely possible. Recent work has extended prediction intervals with finite-sample guarantees to partially identifiable causal outcomes, by means of a sensitivity model for hidden confounding. In deep learning, predictors can exploit their inductive biases for better generalization out of sample. We argue that the structure inherent to a deep ensemble should inform a tighter partial identification of the causal outcomes that they predict. We therefore introduce an approach termed Caus-Modens, for characterizing causal outcome intervals by modulated ensembles. We present a simple approach to partial identification using existing causal sensitivity models and show empirically that Caus-Modens gives tighter outcome intervals, as measured by the necessary interval size to achieve sufficient coverage. The last of our three diverse benchmarks is a novel usage of GPT-4 for observational experiments with unknown but probeable ground truth.
APA
Marmarelis, M.G., Ver Steeg, G., Galstyan, A. & Morstatter, F. (2024). Ensembled Prediction Intervals for Causal Outcomes Under Hidden Confounding. Proceedings of the Third Conference on Causal Learning and Reasoning, in Proceedings of Machine Learning Research 236:18-40. Available from https://proceedings.mlr.press/v236/marmarelis24a.html.
