Borrowing From the Future: Addressing Double Sampling in Model-free Control

Yuhua Zhu, Zachary Izzo, Lexing Ying
Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference, PMLR 145:1099-1136, 2022.

Abstract

In model-free reinforcement learning, the temporal difference method is an important algorithm but might become unstable when combined with nonlinear function approximations. Bellman residual minimization with stochastic gradient descent (SGD) is stable but suffers from the double sampling problem: given the current state, two independent samples for the next state are required, but often only one sample is available. Recently, the borrowing-from-the-future (BFF) algorithm was introduced in (Zhu et al., 2020) to address this issue for policy evaluation. The main idea is to borrow extra randomness from the future to approximately re-sample the next state when the underlying dynamics of the problem are sufficiently smooth. This paper extends the BFF algorithm to action-value function based model-free control. We prove that BFF is close to unbiased SGD when the underlying dynamics vary slowly with respect to actions. We confirm our theoretical findings with numerical simulations.
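To make the double sampling issue and the BFF workaround concrete, the following is a minimal sketch (not the authors' reference implementation) of one BFF stochastic gradient step for Bellman residual minimization with an action-value function. The names q, grad_q, and the trajectory tuple (s_t, a_t, r_t, s_t1, s_t2) are illustrative assumptions; BFF replaces the unavailable second independent sample of the next state with s_tilde = s_t + (s_{t+2} - s_{t+1}), borrowing the increment from the next transition.

# Hypothetical sketch of a BFF gradient step for control (illustrative
# names and conventions; not the paper's code).
import numpy as np

def bff_gradient(theta, s_t, a_t, r_t, s_t1, s_t2, q, grad_q, gamma=0.99):
    """Stochastic gradient estimate for the Bellman residual loss
        L(theta) = E[(r + gamma * max_a Q(s', a) - Q(s, a))^2] / 2.

    q(theta, s) returns the vector of action values at state s;
    grad_q(theta, s, a) returns the gradient of Q(s, a) w.r.t. theta.
    An unbiased gradient needs two independent next-state samples;
    BFF approximates the second one from future randomness.
    """
    # Bellman residual evaluated with the observed next state s_{t+1}.
    delta = r_t + gamma * np.max(q(theta, s_t1)) - q(theta, s_t)[a_t]

    # Approximate re-sample of the next state: borrow the future increment.
    s_tilde = s_t + (s_t2 - s_t1)
    a_star = int(np.argmax(q(theta, s_tilde)))

    # Use the borrowed sample in the gradient factor only, so the residual
    # and its gradient are (approximately) decorrelated.
    g = delta * (gamma * grad_q(theta, s_tilde, a_star) - grad_q(theta, s_t, a_t))
    return g  # descend with theta -= lr * g

Evaluating the residual on the observed s_{t+1} but the gradient factor on the borrowed s_tilde is what removes the double sampling bias; when the underlying dynamics are smooth, s_tilde is approximately distributed as an independent draw of the next state, so the estimate stays close to unbiased SGD.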

Cite this Paper

BibTeX
@InProceedings{pmlr-v145-zhu22a,
  title     = {Borrowing From the Future: Addressing Double Sampling in Model-free Control},
  author    = {Zhu, Yuhua and Izzo, Zachary and Ying, Lexing},
  booktitle = {Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference},
  pages     = {1099--1136},
  year      = {2022},
  editor    = {Bruna, Joan and Hesthaven, Jan and Zdeborova, Lenka},
  volume    = {145},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--19 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v145/zhu22a/zhu22a.pdf},
  url       = {https://proceedings.mlr.press/v145/zhu22a.html},
  abstract  = {In model-free reinforcement learning, the temporal difference method is an important algorithm but might become unstable when combined with nonlinear function approximations. Bellman residual minimization with stochastic gradient descent (SGD) is stable but suffers from the double sampling problem: given the current state, two independent samples for the next state are required, but often only one sample is available. Recently, the borrowing-from-the-future (BFF) algorithm was introduced in (Zhu et al., 2020) to address this issue for policy evaluation. The main idea is to borrow extra randomness from the future to approximately re-sample the next state when the underlying dynamics of the problem are sufficiently smooth. This paper extends the BFF algorithm to action-value function based model-free control. We prove that BFF is close to unbiased SGD when the underlying dynamics vary slowly with respect to actions. We confirm our theoretical findings with numerical simulations.}
}
Endnote
%0 Conference Paper
%T Borrowing From the Future: Addressing Double Sampling in Model-free Control
%A Yuhua Zhu
%A Zachary Izzo
%A Lexing Ying
%B Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Joan Bruna
%E Jan Hesthaven
%E Lenka Zdeborova
%F pmlr-v145-zhu22a
%I PMLR
%P 1099--1136
%U https://proceedings.mlr.press/v145/zhu22a.html
%V 145
%X In model-free reinforcement learning, the temporal difference method is an important algorithm but might become unstable when combined with nonlinear function approximations. Bellman residual minimization with stochastic gradient descent (SGD) is stable but suffers from the double sampling problem: given the current state, two independent samples for the next state are required, but often only one sample is available. Recently, the borrowing-from-the-future (BFF) algorithm was introduced in (Zhu et al., 2020) to address this issue for policy evaluation. The main idea is to borrow extra randomness from the future to approximately re-sample the next state when the underlying dynamics of the problem are sufficiently smooth. This paper extends the BFF algorithm to action-value function based model-free control. We prove that BFF is close to unbiased SGD when the underlying dynamics vary slowly with respect to actions. We confirm our theoretical findings with numerical simulations.
APA
Zhu, Y., Izzo, Z. & Ying, L. (2022). Borrowing From the Future: Addressing Double Sampling in Model-free Control. Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference, in Proceedings of Machine Learning Research 145:1099-1136. Available from https://proceedings.mlr.press/v145/zhu22a.html.
