Counterfactual Influence in Markov Decision Processes

Milad Kazemi, Jessica Lally, Ekaterina Tishchenko, Hana Chockler, Nicola Paoletti
Proceedings of the Fourth Conference on Causal Learning and Reasoning, PMLR 275:792-817, 2025.

Abstract

Our work addresses a fundamental problem in the context of counterfactual inference for Markov Decision Processes (MDPs). Given an MDP path $\tau$, counterfactual inference allows us to derive counterfactual paths $\tau'$ describing _what-if_ versions of $\tau$ obtained under different action sequences than those observed in $\tau$. However, as the counterfactual states and actions deviate from the observed ones over time, _the observation $\tau$ may no longer influence the counterfactual world_, meaning that the analysis is no longer tailored to the individual observation, resulting in interventional outcomes rather than counterfactual ones. This issue specifically affects the popular Gumbel-max structural causal model used for MDP counterfactuals, and yet, it has remained overlooked until now. In this work, we introduce a formal characterisation of influence based on comparing counterfactual and interventional distributions. We devise an algorithm to construct counterfactual models that automatically satisfy influence constraints. Leveraging such models, we derive counterfactual policies that are not just optimal for a given reward structure but also remain tailored to the observed path. Even though there is an unavoidable trade-off between policy optimality and strength of influence constraints, our experiments demonstrate that it is possible to derive (near-)optimal policies while remaining under the influence of the observation.
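To make the mechanism behind the abstract concrete, the sketch below implements the standard Gumbel-max counterfactual sampler for a single MDP transition: sample Gumbel noise consistent with an observed transition, then reuse that noise under a different action. The function names and the toy transition matrix are illustrative assumptions, not taken from the paper, and the paper's influence constraints are not reproduced here.

```python
# Minimal sketch of counterfactual next-state sampling in a Gumbel-max SCM
# (the standard construction the abstract refers to; names and the toy
# transition matrix are illustrative, not taken from the paper).
import numpy as np

def posterior_gumbels(logits, observed_next, rng):
    """Sample Gumbel noise conditioned on the observed transition, i.e.
    on argmax_j (logits[j] + g[j]) == observed_next."""
    log_z = np.logaddexp.reduce(logits)
    # The maximum of logits[j] + Gumbel noise is Gumbel-distributed
    # with location logsumexp(logits).
    top = log_z + rng.gumbel()
    shifted = np.empty_like(logits)
    shifted[observed_next] = top
    for j in range(len(logits)):
        if j == observed_next:
            continue
        # Gumbel(logits[j]) truncated to lie below the observed maximum.
        g = logits[j] + rng.gumbel()
        shifted[j] = -np.log(np.exp(-top) + np.exp(-g))
    return shifted - logits  # posterior noise g_j

def counterfactual_step(P, s, a_obs, s_next_obs, a_cf, rng):
    """Given an observed transition (s, a_obs, s_next_obs), sample the
    counterfactual next state under a different action a_cf, reusing the
    posterior noise so the outcome stays tied to the observation."""
    g = posterior_gumbels(np.log(P[s, a_obs]), s_next_obs, rng)
    return int(np.argmax(np.log(P[s, a_cf]) + g))

# Toy MDP: 3 states, 2 actions; P[s, a] is a distribution over next states.
P = np.array([
    [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]],
    [[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]],
    [[0.1, 0.1, 0.8], [0.3, 0.4, 0.3]],
])

rng = np.random.default_rng(0)
# "What if we had played action 1 instead of 0 in state 0, having observed
# next state 2?" -- averaged over the noise posterior.
samples = [counterfactual_step(P, s=0, a_obs=0, s_next_obs=2, a_cf=1, rng=rng)
           for _ in range(10_000)]
print(np.bincount(samples, minlength=3) / len(samples))
```

In this toy example the counterfactual distribution differs from the interventional one P[0, 1] precisely because both steps share the same posterior noise. The abstract's point is that once the counterfactual path visits states and actions far from those observed, this posterior conditioning no longer applies and sampling reverts to plain interventional behaviour; the paper's influence constraints are designed to characterise and prevent that loss of influence.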

Cite this Paper


BibTeX
@InProceedings{pmlr-v275-kazemi25a,
  title     = {Counterfactual Influence in Markov Decision Processes},
  author    = {Kazemi, Milad and Lally, Jessica and Tishchenko, Ekaterina and Chockler, Hana and Paoletti, Nicola},
  booktitle = {Proceedings of the Fourth Conference on Causal Learning and Reasoning},
  pages     = {792--817},
  year      = {2025},
  editor    = {Huang, Biwei and Drton, Mathias},
  volume    = {275},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--09 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v275/main/assets/kazemi25a/kazemi25a.pdf},
  url       = {https://proceedings.mlr.press/v275/kazemi25a.html},
  abstract  = {Our work addresses a fundamental problem in the context of counterfactual inference for Markov Decision Processes (MDPs). Given an MDP path $\tau$, counterfactual inference allows us to derive counterfactual paths $\tau'$ describing _what-if_ versions of $\tau$ obtained under different action sequences than those observed in $\tau$. However, as the counterfactual states and actions deviate from the observed ones over time, _the observation $\tau$ may no longer influence the counterfactual world_, meaning that the analysis is no longer tailored to the individual observation, resulting in interventional outcomes rather than counterfactual ones. This issue specifically affects the popular Gumbel-max structural causal model used for MDP counterfactuals, and yet, it has remained overlooked until now. In this work, we introduce a formal characterisation of influence based on comparing counterfactual and interventional distributions. We devise an algorithm to construct counterfactual models that automatically satisfy influence constraints. Leveraging such models, we derive counterfactual policies that are not just optimal for a given reward structure but also remain tailored to the observed path. Even though there is an unavoidable trade-off between policy optimality and strength of influence constraints, our experiments demonstrate that it is possible to derive (near-)optimal policies while remaining under the influence of the observation.}
}
Endnote
%0 Conference Paper
%T Counterfactual Influence in Markov Decision Processes
%A Milad Kazemi
%A Jessica Lally
%A Ekaterina Tishchenko
%A Hana Chockler
%A Nicola Paoletti
%B Proceedings of the Fourth Conference on Causal Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2025
%E Biwei Huang
%E Mathias Drton
%F pmlr-v275-kazemi25a
%I PMLR
%P 792--817
%U https://proceedings.mlr.press/v275/kazemi25a.html
%V 275
%X Our work addresses a fundamental problem in the context of counterfactual inference for Markov Decision Processes (MDPs). Given an MDP path $\tau$, counterfactual inference allows us to derive counterfactual paths $\tau'$ describing _what-if_ versions of $\tau$ obtained under different action sequences than those observed in $\tau$. However, as the counterfactual states and actions deviate from the observed ones over time, _the observation $\tau$ may no longer influence the counterfactual world_, meaning that the analysis is no longer tailored to the individual observation, resulting in interventional outcomes rather than counterfactual ones. This issue specifically affects the popular Gumbel-max structural causal model used for MDP counterfactuals, and yet, it has remained overlooked until now. In this work, we introduce a formal characterisation of influence based on comparing counterfactual and interventional distributions. We devise an algorithm to construct counterfactual models that automatically satisfy influence constraints. Leveraging such models, we derive counterfactual policies that are not just optimal for a given reward structure but also remain tailored to the observed path. Even though there is an unavoidable trade-off between policy optimality and strength of influence constraints, our experiments demonstrate that it is possible to derive (near-)optimal policies while remaining under the influence of the observation.
APA
Kazemi, M., Lally, J., Tishchenko, E., Chockler, H. & Paoletti, N. (2025). Counterfactual Influence in Markov Decision Processes. Proceedings of the Fourth Conference on Causal Learning and Reasoning, in Proceedings of Machine Learning Research 275:792-817. Available from https://proceedings.mlr.press/v275/kazemi25a.html.