Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders

Andrew Bennett, Nathan Kallus, Lihong Li, Ali Mousavi
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:1999-2007, 2021.

Abstract

Off-policy evaluation (OPE) in reinforcement learning is an important problem in settings where experimentation is limited, such as healthcare. But, in these very same settings, observed actions are often confounded by unobserved variables making OPE even more difficult. We study an OPE problem in an infinite-horizon, ergodic Markov decision process with unobserved confounders, where states and actions can act as proxies for the unobserved confounders. We show how, given only a latent variable model for states and actions, policy value can be identified from off-policy data. Our method involves two stages. In the first, we show how to use proxies to estimate stationary distribution ratios, extending recent work on breaking the curse of horizon to the confounded setting. In the second, we show optimal balancing can be combined with such learned ratios to obtain policy value while avoiding direct modeling of reward functions. We establish theoretical guarantees of consistency and benchmark our method empirically.
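To make the two-stage structure concrete, below is a minimal, hypothetical sketch (not the paper's actual estimator) of the generic self-normalized weighted-average form that infinite-horizon OPE estimators take once stationary distribution ratios have been learned. The function name weighted_policy_value and all variable names are illustrative placeholders; the ratio estimates w_hat would come from a first stage such as the proxy-based estimation described above, which is not implemented here.

import numpy as np

def weighted_policy_value(w_hat, rewards):
    """Illustrative sketch: policy value as a self-normalized weighted average of rewards.

    w_hat   : estimated stationary distribution ratios, one per logged transition,
              e.g. d_pi(s, a) / d_b(s, a) (assumed to be supplied by a first-stage estimator).
    rewards : observed rewards for the same logged transitions.
    """
    w_hat = np.asarray(w_hat, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    # Self-normalization keeps the estimate stable when the ratio estimates are noisy.
    return float(np.sum(w_hat * rewards) / np.sum(w_hat))

# Toy usage with synthetic logged data (stand-ins, not data from the paper).
rng = np.random.default_rng(0)
w_hat = rng.lognormal(mean=0.0, sigma=0.5, size=1000)   # stand-in ratio estimates
rewards = rng.normal(loc=1.0, scale=0.2, size=1000)     # stand-in rewards
print(weighted_policy_value(w_hat, rewards))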

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-bennett21a,
  title     = {Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders},
  author    = {Bennett, Andrew and Kallus, Nathan and Li, Lihong and Mousavi, Ali},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {1999--2007},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/bennett21a/bennett21a.pdf},
  url       = {https://proceedings.mlr.press/v130/bennett21a.html},
  abstract  = {Off-policy evaluation (OPE) in reinforcement learning is an important problem in settings where experimentation is limited, such as healthcare. But, in these very same settings, observed actions are often confounded by unobserved variables making OPE even more difficult. We study an OPE problem in an infinite-horizon, ergodic Markov decision process with unobserved confounders, where states and actions can act as proxies for the unobserved confounders. We show how, given only a latent variable model for states and actions, policy value can be identified from off-policy data. Our method involves two stages. In the first, we show how to use proxies to estimate stationary distribution ratios, extending recent work on breaking the curse of horizon to the confounded setting. In the second, we show optimal balancing can be combined with such learned ratios to obtain policy value while avoiding direct modeling of reward functions. We establish theoretical guarantees of consistency and benchmark our method empirically.}
}
Endnote
%0 Conference Paper
%T Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders
%A Andrew Bennett
%A Nathan Kallus
%A Lihong Li
%A Ali Mousavi
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-bennett21a
%I PMLR
%P 1999--2007
%U https://proceedings.mlr.press/v130/bennett21a.html
%V 130
%X Off-policy evaluation (OPE) in reinforcement learning is an important problem in settings where experimentation is limited, such as healthcare. But, in these very same settings, observed actions are often confounded by unobserved variables making OPE even more difficult. We study an OPE problem in an infinite-horizon, ergodic Markov decision process with unobserved confounders, where states and actions can act as proxies for the unobserved confounders. We show how, given only a latent variable model for states and actions, policy value can be identified from off-policy data. Our method involves two stages. In the first, we show how to use proxies to estimate stationary distribution ratios, extending recent work on breaking the curse of horizon to the confounded setting. In the second, we show optimal balancing can be combined with such learned ratios to obtain policy value while avoiding direct modeling of reward functions. We establish theoretical guarantees of consistency and benchmark our method empirically.
APA
Bennett, A., Kallus, N., Li, L. & Mousavi, A. (2021). Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:1999-2007. Available from https://proceedings.mlr.press/v130/bennett21a.html.