Borrowing From the Future: An Attempt to Address Double Sampling

Yuhua Zhu, Lexing Ying
Proceedings of The First Mathematical and Scientific Machine Learning Conference, PMLR 107:246-268, 2020.

Abstract

For model-free reinforcement learning, one of the main challenges of stochastic Bellman residual minimization is the double sampling problem: while only a single sample of the next state is available in the model-free setting, two independent samples of the next state are required to perform unbiased stochastic gradient descent. We propose new algorithms that address this problem based on the idea of borrowing extra randomness from the future. When the transition kernel varies slowly with respect to the state, the training trajectories of the new algorithms are shown to stay close to that of unbiased stochastic gradient descent. Numerical results for policy evaluation in both tabular and neural network settings are provided to confirm the theoretical findings.
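To see where the double sampling problem comes from, note that the Bellman residual loss has the form E_s[(V(s) - r(s) - γ E_{s'|s}[V(s')])^2], so its gradient contains a product of two expectations over the next state; an unbiased stochastic gradient therefore needs two independent next-state samples, while reusing one sample introduces a bias. The toy problem, state space, and variable names below are illustrative and not taken from the paper; the sketch only demonstrates the bias that the paper's borrowing-from-the-future idea is designed to avoid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state toy problem: from state 0 the next state is
# uniform over {0, 1}, the reward is 1, and the discount is gamma.
gamma = 0.9
r = 1.0
V = np.array([0.5, -0.5])  # current value estimates

def grad_estimate(s1, s2):
    # Stochastic gradient of (V[0] - r - gamma*E[V[s']])^2 at state 0,
    # using next-state sample s1 inside the residual and s2 for the
    # gradient of the inner expectation. Unbiased iff s1, s2 independent.
    delta = V[0] - r - gamma * V[s1]
    g = np.zeros(2)
    g[0] += 2 * delta
    g[s2] -= 2 * gamma * delta
    return g

n = 200_000
# Two independent next-state samples per step (unbiased, needs a model):
two_sample = np.mean([grad_estimate(rng.integers(2), rng.integers(2))
                      for _ in range(n)], axis=0)
# One next-state sample reused in both roles (model-free, but biased):
one_sample = np.mean([grad_estimate(s, s)
                      for s in rng.integers(2, size=n)], axis=0)

# Exact gradient for comparison: next state is uniform, so
# E[V(s')] = V.mean() and E[e_{s'}] = [0.5, 0.5].
delta_bar = V[0] - r - gamma * V.mean()
exact = 2 * delta_bar * (np.array([1.0, 0.0]) - gamma * np.array([0.5, 0.5]))

print("exact      :", exact)
print("two-sample :", two_sample)  # matches exact up to Monte Carlo error
print("one-sample :", one_sample)  # noticeably biased away from exact
```

In the model-free setting only one next-state sample per step is available, so the two-sample estimator cannot be formed directly; the algorithms of the paper instead reuse the subsequent transition along the trajectory as an approximately independent second sample, which is accurate when the transition kernel varies slowly with the state.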

Cite this Paper


BibTeX
@InProceedings{pmlr-v107-zhu20a,
  title     = {Borrowing From the Future: {A}n Attempt to Address Double Sampling},
  author    = {Zhu, Yuhua and Ying, Lexing},
  booktitle = {Proceedings of The First Mathematical and Scientific Machine Learning Conference},
  pages     = {246--268},
  year      = {2020},
  editor    = {Lu, Jianfeng and Ward, Rachel},
  volume    = {107},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v107/zhu20a/zhu20a.pdf},
  url       = {https://proceedings.mlr.press/v107/zhu20a.html},
  abstract  = {For model-free reinforcement learning, one of the main challenges of stochastic Bellman residual minimization is the double sampling problem, i.e., while only one single sample for the next state is available in the model-free setting, two independent samples for the next state are required in order to perform unbiased stochastic gradient descent. We propose new algorithms for addressing this problem based on the idea of borrowing extra randomness from the future. When the transition kernel varies slowly with respect to the state, it is shown that the training trajectory of new algorithms is close to the one of unbiased stochastic gradient descent. Numerical results for policy evaluation in both tabular and neural network settings are provided to confirm the theoretical findings.}
}
Endnote
%0 Conference Paper
%T Borrowing From the Future: An Attempt to Address Double Sampling
%A Yuhua Zhu
%A Lexing Ying
%B Proceedings of The First Mathematical and Scientific Machine Learning Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Jianfeng Lu
%E Rachel Ward
%F pmlr-v107-zhu20a
%I PMLR
%P 246--268
%U https://proceedings.mlr.press/v107/zhu20a.html
%V 107
%X For model-free reinforcement learning, one of the main challenges of stochastic Bellman residual minimization is the double sampling problem, i.e., while only one single sample for the next state is available in the model-free setting, two independent samples for the next state are required in order to perform unbiased stochastic gradient descent. We propose new algorithms for addressing this problem based on the idea of borrowing extra randomness from the future. When the transition kernel varies slowly with respect to the state, it is shown that the training trajectory of new algorithms is close to the one of unbiased stochastic gradient descent. Numerical results for policy evaluation in both tabular and neural network settings are provided to confirm the theoretical findings.
APA
Zhu, Y., & Ying, L. (2020). Borrowing From the Future: An Attempt to Address Double Sampling. Proceedings of The First Mathematical and Scientific Machine Learning Conference, in Proceedings of Machine Learning Research 107:246-268. Available from https://proceedings.mlr.press/v107/zhu20a.html.