Making Deep Q-learning methods robust to time discretization
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6096-6104, 2019.
Abstract
Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al., 2017; Zhang et al., 2018). Overcoming such sensitivity is key to making DRL applicable to real world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller. Empirically, we find that Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performance over a wide range of time discretizations, and confirm this robustness empirically.
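To make the collapse concrete, here is a sketch of the underlying scaling argument (notation is ours; the paper gives the precise statement): for a near continuous-time environment with time step $\delta t$, the state-action value function satisfies

    $Q^{\pi}_{\delta t}(s, a) = V^{\pi}(s) + \delta t \, A^{\pi}(s, a) + o(\delta t)$,

so as $\delta t \to 0$ the Q-function degenerates to the state value $V^{\pi}(s)$: the action-dependent term, which is exactly what greedy action selection relies on, shrinks to zero and is drowned out by approximation error. One natural route to robustness is then to keep the action-dependent signal at a $\delta t$-independent scale, e.g., by learning the value function and a suitably rescaled advantage separately rather than Q itself.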