Making Deep Q-learning methods robust to time discretization

Corentin Tallec, Léonard Blier, Yann Ollivier
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6096-6104, 2019.

Abstract

Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018). Overcoming such sensitivity is key to making DRL applicable to real world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller. Empirically, we find that Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performances over a wide range of time discretizations, and confirm this robustness empirically.
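A note for readers skimming the abstract: the degeneracy of Q-learning at small time steps can be sketched by a first-order expansion of the action-value function in the discretization step. The notation below (δt for the time step, A^π for the advantage function) is not defined in the abstract itself; the precise statement and proof are in the paper.

\[
  Q^{\pi}_{\delta t}(s, a) \;=\; V^{\pi}(s) \;+\; \delta t \, A^{\pi}(s, a) \;+\; o(\delta t)
  \;\xrightarrow[\;\delta t \to 0\;]{}\; V^{\pi}(s).
\]

Only the O(δt) term carries any dependence on the action, so greedy action selection from an approximate Q is swamped by approximation noise as δt shrinks. Roughly, the off-policy algorithm detailed in the paper learns V and a δt-rescaled advantage, with learning rates scaled accordingly, which is what yields the robustness across discretizations claimed above.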

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-tallec19a,
  title     = {Making Deep Q-learning methods robust to time discretization},
  author    = {Tallec, Corentin and Blier, L{\'e}onard and Ollivier, Yann},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {6096--6104},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/tallec19a/tallec19a.pdf},
  url       = {https://proceedings.mlr.press/v97/tallec19a.html},
  abstract  = {Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018). Overcoming such sensitivity is key to making DRL applicable to real world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller. Empirically, we find that Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performances over a wide range of time discretizations, and confirm this robustness empirically.}
}
Endnote
%0 Conference Paper
%T Making Deep Q-learning methods robust to time discretization
%A Corentin Tallec
%A Léonard Blier
%A Yann Ollivier
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-tallec19a
%I PMLR
%P 6096--6104
%U https://proceedings.mlr.press/v97/tallec19a.html
%V 97
%X Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018). Overcoming such sensitivity is key to making DRL applicable to real world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller. Empirically, we find that Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performances over a wide range of time discretizations, and confirm this robustness empirically.
APA
Tallec, C., Blier, L. & Ollivier, Y. (2019). Making Deep Q-learning methods robust to time discretization. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:6096-6104. Available from https://proceedings.mlr.press/v97/tallec19a.html.
