Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning

Philip Thomas, Emma Brunskill
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2139-2148, 2016.

Abstract

In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods; that is, it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang & Li, 2015), and a new way to mix between model-based and importance-sampling-based estimates.
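The abstract references two ingredients: the doubly robust (DR) estimator of Jiang & Li (2015) and a way to mix model-based and importance-sampling-based estimates. The sketch below is only a minimal illustration of those ingredients, not the paper's actual estimator. The function names, the per-step inputs (behavior_probs, eval_probs, q_hat, v_hat), and the fixed mixing weight alpha are all illustrative assumptions; the paper proposes a new way to choose the mixture, which the fixed weight here does not attempt to reproduce.

```python
# Minimal sketch (assumed interface, not the paper's API): a per-trajectory
# doubly robust off-policy estimate in the spirit of Jiang & Li (2015), plus a
# naive fixed blend with a model-based estimate.

import numpy as np

def dr_trajectory_estimate(rewards, behavior_probs, eval_probs, q_hat, v_hat, gamma=1.0):
    """Doubly robust return estimate for one trajectory.

    rewards[t]        : reward observed at step t
    behavior_probs[t] : pi_b(a_t | s_t), probability under the logging policy
    eval_probs[t]     : pi_e(a_t | s_t), probability under the evaluated policy
    q_hat[t]          : model estimate of Q^{pi_e}(s_t, a_t)
    v_hat[t]          : model estimate of V^{pi_e}(s_t)
    """
    dr = 0.0
    # Recursive DR form, working backwards through the trajectory:
    # DR_t = V_hat(s_t) + rho_t * (r_t + gamma * DR_{t+1} - Q_hat(s_t, a_t))
    for t in reversed(range(len(rewards))):
        rho = eval_probs[t] / behavior_probs[t]  # per-step importance weight
        dr = v_hat[t] + rho * (rewards[t] + gamma * dr - q_hat[t])
    return dr

def blended_estimate(trajectories, model_based_estimate, alpha=0.5, gamma=1.0):
    """Convex blend of a model-based estimate with the mean DR estimate.

    trajectories : iterable of (rewards, behavior_probs, eval_probs, q_hat, v_hat)
    alpha        : fixed mixing weight in [0, 1], used only for illustration
    """
    dr_values = [dr_trajectory_estimate(*traj, gamma=gamma) for traj in trajectories]
    dr_mean = float(np.mean(dr_values))
    return alpha * model_based_estimate + (1.0 - alpha) * dr_mean
```

A fixed alpha only shows where a mixing weight enters; choosing that mixture well from the data is part of what the paper is about.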

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-thomasa16,
  title     = {Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning},
  author    = {Philip Thomas and Emma Brunskill},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2139--2148},
  year      = {2016},
  editor    = {Maria Florina Balcan and Kilian Q. Weinberger},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/thomasa16.pdf},
  url       = {http://proceedings.mlr.press/v48/thomasa16.html},
  abstract  = {In this paper we present a new way of predicting the performance of a reinforcement learning policy given historical data that may have been generated by a different policy. The ability to evaluate a policy from historical data is important for applications where the deployment of a bad policy can be dangerous or costly. We show empirically that our algorithm produces estimates that often have orders of magnitude lower mean squared error than existing methods; that is, it makes more efficient use of the available data. Our new estimator is based on two advances: an extension of the doubly robust estimator (Jiang & Li, 2015), and a new way to mix between model-based and importance-sampling-based estimates.}
}
APA
Thomas, P. & Brunskill, E. (2016). Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning. Proceedings of The 33rd International Conference on Machine Learning, in PMLR 48:2139-2148.
