Doubly Robust Off-policy Value Evaluation for Reinforcement Learning

Nan Jiang, Lihong Li
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:652-661, 2016.

Abstract

We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. This problem is often a critical step when applying RL to real-world problems. Despite its importance, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and can have a much lower variance than the popular importance sampling estimators. We demonstrate the estimator’s accuracy in several benchmark problems, and illustrate its use as a subroutine in safe policy improvement. We also provide theoretical results on the inherent hardness of the problem, and show that our estimator can match the lower bound in certain scenarios.
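As a rough illustration of the idea summarized above, the sketch below shows how a doubly robust (DR) value estimate for a target policy can be computed from a single logged trajectory by combining an approximate value model with per-step importance weights. This is a minimal sketch, not the paper's code: the function and variable names (dr_estimate, pi_target, pi_behavior, q_hat, v_hat) and the data layout are illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation): a doubly robust (DR)
# value estimate for one logged trajectory, computed with a backward recursion
# that corrects a learned Q/V model using per-step importance weights.

def dr_estimate(trajectory, pi_target, pi_behavior, q_hat, v_hat, gamma=1.0):
    """trajectory: list of (state, action, reward) tuples logged under the behavior policy.
    pi_target(a, s), pi_behavior(a, s): action probabilities under each policy (assumed known).
    q_hat(s, a), v_hat(s): an approximate model of the target policy's action/state values.
    Returns a DR estimate of the target policy's value from this trajectory."""
    dr = 0.0
    # Work backwards from the end of the trajectory: at each step, start from the
    # model's prediction and add an importance-weighted correction term.
    for (s, a, r) in reversed(trajectory):
        rho = pi_target(a, s) / pi_behavior(a, s)   # per-step importance weight
        dr = v_hat(s) + rho * (r + gamma * dr - q_hat(s, a))
    return dr
```

Averaging such per-trajectory estimates over the logged dataset gives the overall value estimate. The estimate remains unbiased as long as the behavior probabilities are correct, and when the value model is accurate the correction terms are small, which is the intuition behind the variance reduction over plain importance sampling.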

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-jiang16,
  title     = {Doubly Robust Off-policy Value Evaluation for Reinforcement Learning},
  author    = {Jiang, Nan and Li, Lihong},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {652--661},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/jiang16.pdf},
  url       = {https://proceedings.mlr.press/v48/jiang16.html},
  abstract  = {We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. This problem is often a critical step when applying RL to real-world problems. Despite its importance, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and can have a much lower variance than the popular importance sampling estimators. We demonstrate the estimator's accuracy in several benchmark problems, and illustrate its use as a subroutine in safe policy improvement. We also provide theoretical results on the inherent hardness of the problem, and show that our estimator can match the lower bound in certain scenarios.}
}
EndNote
%0 Conference Paper
%T Doubly Robust Off-policy Value Evaluation for Reinforcement Learning
%A Nan Jiang
%A Lihong Li
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-jiang16
%I PMLR
%P 652--661
%U https://proceedings.mlr.press/v48/jiang16.html
%V 48
%X We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. This problem is often a critical step when applying RL to real-world problems. Despite its importance, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and can have a much lower variance than the popular importance sampling estimators. We demonstrate the estimator's accuracy in several benchmark problems, and illustrate its use as a subroutine in safe policy improvement. We also provide theoretical results on the inherent hardness of the problem, and show that our estimator can match the lower bound in certain scenarios.
RIS
TY - CPAPER
TI - Doubly Robust Off-policy Value Evaluation for Reinforcement Learning
AU - Nan Jiang
AU - Lihong Li
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-jiang16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 652
EP - 661
L1 - http://proceedings.mlr.press/v48/jiang16.pdf
UR - https://proceedings.mlr.press/v48/jiang16.html
AB - We study the problem of off-policy value evaluation in reinforcement learning (RL), where one aims to estimate the value of a new policy based on data collected by a different policy. This problem is often a critical step when applying RL to real-world problems. Despite its importance, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and can have a much lower variance than the popular importance sampling estimators. We demonstrate the estimator's accuracy in several benchmark problems, and illustrate its use as a subroutine in safe policy improvement. We also provide theoretical results on the inherent hardness of the problem, and show that our estimator can match the lower bound in certain scenarios.
ER -
APA
Jiang, N. & Li, L. (2016). Doubly Robust Off-policy Value Evaluation for Reinforcement Learning. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:652-661. Available from https://proceedings.mlr.press/v48/jiang16.html.