Policy Learning and Evaluation with Randomized Quasi-Monte Carlo

Sébastien M. R. Arnold, Pierre L’Ecuyer, Liyu Chen, Yi-Fan Chen, Fei Sha
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:1041-1061, 2022.

Abstract

Hard integrals arise frequently in reinforcement learning, for example when computing expectations in policy evaluation and policy iteration. They are often analytically intractable and typically estimated with Monte Carlo methods, whose sampling contributes to high variance in policy values and gradients. In this work, we propose to replace Monte Carlo samples with low-discrepancy point sets. We combine policy gradient methods with Randomized Quasi-Monte Carlo, yielding variance-reduced formulations of policy gradient and actor-critic algorithms. These formulations are effective for policy evaluation and policy improvement, as they outperform state-of-the-art algorithms on standardized continuous control benchmarks. Our empirical analyses validate the intuition that replacing Monte Carlo with Quasi-Monte Carlo yields significantly more accurate gradient estimates.
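The snippet below is a minimal illustrative sketch of the core idea, not the authors' implementation: the one-step Gaussian-policy bandit, the reward r(a), and all constants are assumptions chosen for clarity. It samples actions by pushing a randomized (scrambled) Sobol' point set through the Gaussian inverse CDF and compares the variance of the resulting score-function policy-gradient estimate against plain Monte Carlo (requires NumPy and SciPy).

```python
# A minimal sketch (not the authors' code): estimate a Gaussian policy's
# score-function gradient with i.i.d. Monte Carlo uniforms versus a
# randomized quasi-Monte Carlo (scrambled Sobol') point set.
import numpy as np
from scipy.stats import norm, qmc

mu, sigma = 0.5, 1.0            # Gaussian policy N(mu, sigma^2) over actions
r = lambda a: -(a - 2.0) ** 2   # toy one-step reward (illustrative choice)

def grad_estimate(u):
    """Score-function estimate of d/dmu E[r(a)] from uniforms u in (0, 1)."""
    a = mu + sigma * norm.ppf(u)        # inverse-CDF map: uniforms -> actions
    score = (a - mu) / sigma ** 2       # d/dmu log N(a; mu, sigma)
    return np.mean(r(a) * score)

n, reps = 256, 200                      # n is a power of 2, as Sobol' prefers
rng = np.random.default_rng(0)
mc = [grad_estimate(rng.random(n)) for _ in range(reps)]
# Each replication uses a freshly scrambled Sobol' point set, so the
# estimator stays unbiased while each point set remains low-discrepancy.
rqmc = [grad_estimate(qmc.Sobol(d=1, scramble=True, seed=s).random(n).ravel())
        for s in range(reps)]
print("MC   variance:", np.var(mc))
print("RQMC variance:", np.var(rqmc))
```

On this smooth toy integrand the RQMC replications should show a substantially smaller variance than the MC ones at equal sample size, mirroring the intuition the abstract describes; the paper develops this substitution inside full policy gradient and actor-critic algorithms rather than a one-step bandit.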

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-arnold22a,
  title     = {Policy Learning and Evaluation with Randomized Quasi-Monte Carlo},
  author    = {Arnold, S\'ebastien M. R. and L'Ecuyer, Pierre and Chen, Liyu and Chen, Yi-Fan and Sha, Fei},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {1041--1061},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/arnold22a/arnold22a.pdf},
  url       = {https://proceedings.mlr.press/v151/arnold22a.html}
}
Endnote
%0 Conference Paper
%T Policy Learning and Evaluation with Randomized Quasi-Monte Carlo
%A Sébastien M. R. Arnold
%A Pierre L’Ecuyer
%A Liyu Chen
%A Yi-Fan Chen
%A Fei Sha
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-arnold22a
%I PMLR
%P 1041--1061
%U https://proceedings.mlr.press/v151/arnold22a.html
%V 151
APA
Arnold, S.M.R., L’Ecuyer, P., Chen, L., Chen, Y.-F., & Sha, F. (2022). Policy Learning and Evaluation with Randomized Quasi-Monte Carlo. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:1041-1061. Available from https://proceedings.mlr.press/v151/arnold22a.html.