Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling

Yao Liu, Pierre-Luc Bacon, Emma Brunskill
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6184-6193, 2020.

Abstract

Off-policy policy estimators that use importance sampling (IS) can suffer from high variance in long-horizon domains, and there has been particular excitement over new IS methods that leverage the structure of Markov decision processes. We analyze the variance of the most popular approaches through the viewpoint of conditional Monte Carlo. Surprisingly, we find that in finite horizon MDPs there is no strict variance reduction from per-decision or marginalized importance sampling compared with vanilla importance sampling. We then provide sufficient conditions under which the per-decision or marginalized estimators will provably reduce the variance over importance sampling with finite horizons. For the asymptotic (in terms of horizon $T$) case, we develop upper and lower bounds on the variance of those estimators, which yield sufficient conditions under which there exists an exponential vs. polynomial gap between the variance of importance sampling and that of the per-decision or stationary/marginalized estimators. These results help advance our understanding of whether and when new types of IS estimators will improve the accuracy of off-policy estimation.
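For concreteness, here is a minimal sketch of the estimators the abstract compares, written in standard off-policy evaluation notation rather than the paper's own. Assume a trajectory $(s_0, a_0, r_0, \ldots, s_{T-1}, a_{T-1}, r_{T-1})$ drawn from a behavior policy $\mu$, a target policy $\pi$, a discount factor $\gamma$, and per-step importance ratios $\rho_t = \pi(a_t \mid s_t) / \mu(a_t \mid s_t)$:

$$\hat{v}_{\mathrm{IS}} = \Big(\prod_{t=0}^{T-1} \rho_t\Big) \sum_{t=0}^{T-1} \gamma^t r_t \qquad \text{(vanilla, trajectory-wise IS)}$$

$$\hat{v}_{\mathrm{PDIS}} = \sum_{t=0}^{T-1} \gamma^t \Big(\prod_{k=0}^{t} \rho_k\Big) r_t \qquad \text{(per-decision IS)}$$

$$\hat{v}_{\mathrm{MIS}} = \sum_{t=0}^{T-1} \gamma^t \, \frac{d_t^{\pi}(s_t)}{d_t^{\mu}(s_t)} \, \rho_t \, r_t \qquad \text{(marginalized IS, where $d_t^{\pi}$ and $d_t^{\mu}$ are the marginal state distributions at time $t$)}$$

The cumulative product of ratios in the vanilla estimator is the source of the "curse of horizon": its variance can grow exponentially in $T$. Per-decision IS weights each reward only by the ratios of the actions that precede it, and marginalized IS replaces the cumulative action-ratio product with a marginal state-density ratio; the paper's analysis characterizes when these restructurings actually yield lower variance.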

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-liu20a,
  title     = {Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling},
  author    = {Liu, Yao and Bacon, Pierre-Luc and Brunskill, Emma},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6184--6193},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/liu20a/liu20a.pdf},
  url       = {https://proceedings.mlr.press/v119/liu20a.html},
  abstract  = {Off-policy policy estimators that use importance sampling (IS) can suffer from high variance in long-horizon domains, and there has been particular excitement over new IS methods that leverage the structure of Markov decision processes. We analyze the variance of the most popular approaches through the viewpoint of conditional Monte Carlo. Surprisingly, we find that in finite horizon MDPs there is no strict variance reduction from per-decision or marginalized importance sampling compared with vanilla importance sampling. We then provide sufficient conditions under which the per-decision or marginalized estimators will provably reduce the variance over importance sampling with finite horizons. For the asymptotic (in terms of horizon $T$) case, we develop upper and lower bounds on the variance of those estimators, which yield sufficient conditions under which there exists an exponential vs. polynomial gap between the variance of importance sampling and that of the per-decision or stationary/marginalized estimators. These results help advance our understanding of whether and when new types of IS estimators will improve the accuracy of off-policy estimation.}
}
Endnote
%0 Conference Paper
%T Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling
%A Yao Liu
%A Pierre-Luc Bacon
%A Emma Brunskill
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-liu20a
%I PMLR
%P 6184--6193
%U https://proceedings.mlr.press/v119/liu20a.html
%V 119
%X Off-policy policy estimators that use importance sampling (IS) can suffer from high variance in long-horizon domains, and there has been particular excitement over new IS methods that leverage the structure of Markov decision processes. We analyze the variance of the most popular approaches through the viewpoint of conditional Monte Carlo. Surprisingly, we find that in finite horizon MDPs there is no strict variance reduction from per-decision or marginalized importance sampling compared with vanilla importance sampling. We then provide sufficient conditions under which the per-decision or marginalized estimators will provably reduce the variance over importance sampling with finite horizons. For the asymptotic (in terms of horizon $T$) case, we develop upper and lower bounds on the variance of those estimators, which yield sufficient conditions under which there exists an exponential vs. polynomial gap between the variance of importance sampling and that of the per-decision or stationary/marginalized estimators. These results help advance our understanding of whether and when new types of IS estimators will improve the accuracy of off-policy estimation.
APA
Liu, Y., Bacon, P.-L., & Brunskill, E. (2020). Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6184-6193. Available from https://proceedings.mlr.press/v119/liu20a.html.
