Statistically Efficient Off-Policy Policy Gradients

Nathan Kallus, Masatoshi Uehara
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5089-5100, 2020.

Abstract

Policy gradient methods in reinforcement learning update policy parameters by taking steps in the direction of an estimated gradient of policy value. In this paper, we consider the efficient estimation of policy gradients from off-policy data, where the estimation is particularly non-trivial. We derive the asymptotic lower bound on the feasible mean-squared error in both Markov and non-Markov decision processes and show that existing estimators fail to achieve it in general settings. We propose a meta-algorithm that achieves the lower bound without any parametric assumptions and exhibits a unique 4-way double robustness property. We discuss how to estimate nuisances that the algorithm relies on. Finally, we establish guarantees on the rate at which we approach a stationary point when we take steps in the direction of our new estimated policy gradient.
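To make the off-policy setting concrete, the sketch below shows the simplest baseline the paper improves upon: a plain importance-sampling policy-gradient estimate in a toy two-armed bandit, where actions were logged under a behavior policy but we want the gradient of a softmax target policy's value. This is an illustrative sketch only; all names and the toy setup are assumptions, not the paper's meta-algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Toy 2-armed bandit. Logged data come from a uniform behavior policy;
# we estimate the policy gradient of a softmax target policy.
n_actions = 2
true_rewards = np.array([1.0, 0.0])        # mean reward of each arm
behavior_probs = np.array([0.5, 0.5])      # known behavior policy
theta = np.zeros(n_actions)                # target-policy (softmax) parameters

# Logged dataset: actions drawn from the behavior policy, noisy rewards.
n = 10_000
actions = rng.choice(n_actions, size=n, p=behavior_probs)
rewards = true_rewards[actions] + 0.1 * rng.standard_normal(n)

pi = softmax(theta)
# For a softmax policy, grad log pi(a; theta) = e_a - pi.
grad_log_pi = np.eye(n_actions)[actions] - pi
# Importance ratios correct for the mismatch between target and behavior.
weights = pi[actions] / behavior_probs[actions]

# Plain importance-sampling gradient estimate: mean of w * r * grad log pi.
grad_hat = ((weights * rewards)[:, None] * grad_log_pi).mean(axis=0)
print(grad_hat)
```

At `theta = 0` the target policy is uniform, so the true gradient component for arm k is `pi_k * (r_k - V) = 0.5 * (r_k - 0.5)`, i.e. roughly `[0.25, -0.25]`, which the Monte Carlo estimate approaches. The paper's contribution is an efficient, doubly robust estimator of this gradient that attains the asymptotic mean-squared-error lower bound, which this naive baseline does not.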

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-kallus20c,
  title     = {Statistically Efficient Off-Policy Policy Gradients},
  author    = {Kallus, Nathan and Uehara, Masatoshi},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5089--5100},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/kallus20c/kallus20c.pdf},
  url       = {https://proceedings.mlr.press/v119/kallus20c.html},
  abstract  = {Policy gradient methods in reinforcement learning update policy parameters by taking steps in the direction of an estimated gradient of policy value. In this paper, we consider the efficient estimation of policy gradients from off-policy data, where the estimation is particularly non-trivial. We derive the asymptotic lower bound on the feasible mean-squared error in both Markov and non-Markov decision processes and show that existing estimators fail to achieve it in general settings. We propose a meta-algorithm that achieves the lower bound without any parametric assumptions and exhibits a unique 4-way double robustness property. We discuss how to estimate nuisances that the algorithm relies on. Finally, we establish guarantees on the rate at which we approach a stationary point when we take steps in the direction of our new estimated policy gradient.}
}
Endnote
%0 Conference Paper
%T Statistically Efficient Off-Policy Policy Gradients
%A Nathan Kallus
%A Masatoshi Uehara
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-kallus20c
%I PMLR
%P 5089--5100
%U https://proceedings.mlr.press/v119/kallus20c.html
%V 119
%X Policy gradient methods in reinforcement learning update policy parameters by taking steps in the direction of an estimated gradient of policy value. In this paper, we consider the efficient estimation of policy gradients from off-policy data, where the estimation is particularly non-trivial. We derive the asymptotic lower bound on the feasible mean-squared error in both Markov and non-Markov decision processes and show that existing estimators fail to achieve it in general settings. We propose a meta-algorithm that achieves the lower bound without any parametric assumptions and exhibits a unique 4-way double robustness property. We discuss how to estimate nuisances that the algorithm relies on. Finally, we establish guarantees on the rate at which we approach a stationary point when we take steps in the direction of our new estimated policy gradient.
APA
Kallus, N. & Uehara, M. (2020). Statistically Efficient Off-Policy Policy Gradients. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5089-5100. Available from https://proceedings.mlr.press/v119/kallus20c.html.