Model-Free and Model-Based Policy Evaluation when Causality is Uncertain

David A Bruns-Smith
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1116-1126, 2021.

Abstract

When decision-makers can directly intervene, policy evaluation algorithms give valid causal estimates. In off-policy evaluation (OPE), there may exist unobserved variables that both impact the dynamics and are used by the unknown behavior policy. These “confounders” will introduce spurious correlations and naive estimates for a new policy will be biased. We develop worst-case bounds to assess sensitivity to these unobserved confounders in finite horizons when confounders are drawn iid each period. We demonstrate that a model-based approach with robust MDPs gives sharper lower bounds by exploiting domain knowledge about the dynamics. Finally, we show that when unobserved confounders are persistent over time, OPE is far more difficult and existing techniques produce extremely conservative bounds.
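The bias the abstract describes can be seen in a minimal toy simulation (not from the paper; all names and numbers below are illustrative). An unobserved confounder drives both the behavior policy's action and the reward, so a naive importance-sampling estimate that only sees the marginal behavior policy is badly biased:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unobserved confounder U affects both the behavior action and the reward.
u = rng.integers(0, 2, n)
# Behavior policy (unknown to the evaluator) follows U with probability 0.9.
a = np.where(rng.random(n) < 0.9, u, 1 - u)
# Reward depends on U through the dynamics: r = 1 iff the action matches U.
r = (a == u).astype(float)

# Target policy: choose each action uniformly, independent of U.
# Its true value is P(a' == U) = 0.5 for a' ~ Uniform{0, 1}.
true_value = 0.5

# Naive importance sampling: since U is unobserved, the estimated behavior
# policy is the marginal P(a) = 0.5, so every importance weight equals 1.
pi_target = 0.5
b_marginal = 0.5
naive_est = np.mean((pi_target / b_marginal) * r)

print(f"true value: {true_value:.2f}, naive OPE estimate: {naive_est:.2f}")
```

The naive estimate comes out near 0.9, far above the target policy's true value of 0.5: the confounder makes the behavior data look as if the observed actions cause high reward. Sensitivity bounds of the kind the paper develops are meant to account for exactly this gap.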

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-bruns-smith21a,
  title     = {Model-Free and Model-Based Policy Evaluation when Causality is Uncertain},
  author    = {Bruns-Smith, David A},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1116--1126},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/bruns-smith21a/bruns-smith21a.pdf},
  url       = {https://proceedings.mlr.press/v139/bruns-smith21a.html},
  abstract  = {When decision-makers can directly intervene, policy evaluation algorithms give valid causal estimates. In off-policy evaluation (OPE), there may exist unobserved variables that both impact the dynamics and are used by the unknown behavior policy. These “confounders” will introduce spurious correlations and naive estimates for a new policy will be biased. We develop worst-case bounds to assess sensitivity to these unobserved confounders in finite horizons when confounders are drawn iid each period. We demonstrate that a model-based approach with robust MDPs gives sharper lower bounds by exploiting domain knowledge about the dynamics. Finally, we show that when unobserved confounders are persistent over time, OPE is far more difficult and existing techniques produce extremely conservative bounds.}
}
Endnote
%0 Conference Paper
%T Model-Free and Model-Based Policy Evaluation when Causality is Uncertain
%A David A Bruns-Smith
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-bruns-smith21a
%I PMLR
%P 1116--1126
%U https://proceedings.mlr.press/v139/bruns-smith21a.html
%V 139
%X When decision-makers can directly intervene, policy evaluation algorithms give valid causal estimates. In off-policy evaluation (OPE), there may exist unobserved variables that both impact the dynamics and are used by the unknown behavior policy. These “confounders” will introduce spurious correlations and naive estimates for a new policy will be biased. We develop worst-case bounds to assess sensitivity to these unobserved confounders in finite horizons when confounders are drawn iid each period. We demonstrate that a model-based approach with robust MDPs gives sharper lower bounds by exploiting domain knowledge about the dynamics. Finally, we show that when unobserved confounders are persistent over time, OPE is far more difficult and existing techniques produce extremely conservative bounds.
APA
Bruns-Smith, D. A. (2021). Model-Free and Model-Based Policy Evaluation when Causality is Uncertain. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1116-1126. Available from https://proceedings.mlr.press/v139/bruns-smith21a.html.