Stochastic Variance Reduction Methods for Policy Evaluation

Simon S. Du, Jianshu Chen, Lihong Li, Lin Xiao, Dengyong Zhou
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1049-1058, 2017.

Abstract

Policy evaluation is concerned with estimating the value function that predicts long-term values of states under a given policy. It is a crucial step in many reinforcement-learning algorithms. In this paper, we focus on policy evaluation with linear function approximation over a fixed dataset. We first transform the empirical policy evaluation problem into a (quadratic) convex-concave saddle-point problem, and then present a primal-dual batch gradient method, as well as two stochastic variance reduction methods for solving the problem. These algorithms scale linearly in both sample size and feature dimension. Moreover, they achieve linear convergence even when the saddle-point problem has only strong concavity in the dual variables but no strong convexity in the primal variables. Numerical experiments on benchmark problems demonstrate the effectiveness of our methods.
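As a concrete illustration of the approach summarized above, the sketch below sets up the (quadratic) convex-concave saddle-point form of the empirical policy-evaluation objective and runs SVRG-style primal-dual updates on it. This is a minimal reconstruction from the abstract alone, not the authors' implementation: the per-sample quantities A_t, b_t, C_t follow the standard MSPBE formulation with linear features, and the function name, step sizes sigma_theta and sigma_w, and toy defaults are illustrative placeholders, not the paper's tuned values.

    import numpy as np

    def svrg_policy_evaluation(phi, phi_next, rewards, gamma=0.99,
                               sigma_theta=0.01, sigma_w=0.01,
                               n_epochs=50, seed=0):
        """SVRG-style primal-dual iteration for the empirical saddle-point
        formulation of policy evaluation with linear function approximation:

            min_theta max_w  w^T (b - A theta) - (1/2) w^T C w,

        where A, b, C are empirical averages over the fixed dataset of
        A_t = phi_t (phi_t - gamma phi'_t)^T, b_t = r_t phi_t,
        C_t = phi_t phi_t^T.
        """
        rng = np.random.default_rng(seed)
        n, d = phi.shape
        # Per-sample matrices/vectors for the n transitions.
        A = np.einsum('ti,tj->tij', phi, phi - gamma * phi_next)  # (n, d, d)
        b = rewards[:, None] * phi                                # (n, d)
        C = np.einsum('ti,tj->tij', phi, phi)                     # (n, d, d)
        A_bar, b_bar, C_bar = A.mean(0), b.mean(0), C.mean(0)

        theta = np.zeros(d)  # primal variable (value-function weights)
        w = np.zeros(d)      # dual variable
        for _ in range(n_epochs):
            # Snapshot point and full batch gradients at the snapshot.
            theta_s, w_s = theta.copy(), w.copy()
            g_theta_full = -A_bar.T @ w_s                     # grad w.r.t. theta
            g_w_full = b_bar - A_bar @ theta_s - C_bar @ w_s  # grad w.r.t. w
            for _ in range(n):  # inner loop: one pass worth of samples
                t = rng.integers(n)
                # Variance-reduced stochastic gradients:
                # grad at current point - grad at snapshot + full grad.
                g_theta = (-A[t].T @ w) - (-A[t].T @ w_s) + g_theta_full
                g_w = ((b[t] - A[t] @ theta - C[t] @ w)
                       - (b[t] - A[t] @ theta_s - C[t] @ w_s) + g_w_full)
                theta = theta - sigma_theta * g_theta  # primal descent
                w = w + sigma_w * g_w                  # dual ascent
        return theta

Note that each epoch costs O(nd) plus O(d) per inner step, matching the linear scaling in sample size and feature dimension claimed in the abstract, and that the objective is strongly concave in w (through C) but not strongly convex in theta; linear convergence under exactly this asymmetry is the property the paper establishes.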

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-du17a,
  title     = {Stochastic Variance Reduction Methods for Policy Evaluation},
  author    = {Simon S. Du and Jianshu Chen and Lihong Li and Lin Xiao and Dengyong Zhou},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1049--1058},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/du17a/du17a.pdf},
  url       = {https://proceedings.mlr.press/v70/du17a.html}
}
APA
Du, S.S., Chen, J., Li, L., Xiao, L. & Zhou, D. (2017). Stochastic Variance Reduction Methods for Policy Evaluation. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1049-1058. Available from https://proceedings.mlr.press/v70/du17a.html.
