Diagnosing Bottlenecks in Deep Q-learning Algorithms

Justin Fu, Aviral Kumar, Matthew Soh, Sergey Levine
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2021-2030, 2019.

Abstract

Q-learning methods are a common class of algorithms used in reinforcement learning (RL). However, their behavior with function approximation, especially with neural networks, is poorly understood theoretically and empirically. In this work, we aim to experimentally investigate potential issues in Q-learning, by means of a "unit testing" framework where we can utilize oracles to disentangle sources of error. Specifically, we investigate questions related to function approximation, sampling error and nonstationarity, and where available, verify if trends found in oracle settings hold true with deep RL methods. We find that large neural network architectures have many benefits with regards to learning stability; offer several practical compensations for overfitting; and develop a novel sampling method based on explicitly compensating for function approximation error that yields fair improvement on high-dimensional continuous control domains.
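For readers who want a concrete reference point for the Q-learning update the abstract discusses, below is a minimal tabular sketch on a made-up three-state chain MDP. This is a generic illustration only, not the paper's "unit testing" framework or code; the MDP, hyperparameters, and function names are all invented for the example.

```python
import numpy as np

# Toy deterministic chain MDP: states 0, 1, 2; state 2 is terminal.
# Action 0 moves right (reward 1.0 on reaching the terminal state),
# action 1 stays in place with reward 0.
GAMMA = 0.9

def step(s, a):
    """Environment transition: returns (next_state, reward, done)."""
    if a == 0:
        s2 = s + 1
        r = 1.0 if s2 == 2 else 0.0
        return s2, r, s2 == 2
    return s, 0.0, False

def q_learning(episodes=500, alpha=0.5, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((3, 2))  # Q[state, action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = step(s, a)
            # Bellman backup: move Q(s, a) toward r + gamma * max_a' Q(s', a').
            target = r + (0.0 if done else GAMMA * np.max(Q[s2]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

Q = q_learning()
```

On this chain the optimal values are Q*(1, 0) = 1.0 and Q*(0, 0) = 0.9, and the tabular backup recovers them; the paper's experiments concern what happens when this same backup is combined with function approximation, sampling error, and nonstationary targets.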

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-fu19a,
  title     = {Diagnosing Bottlenecks in Deep Q-learning Algorithms},
  author    = {Fu, Justin and Kumar, Aviral and Soh, Matthew and Levine, Sergey},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {2021--2030},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/fu19a/fu19a.pdf},
  url       = {https://proceedings.mlr.press/v97/fu19a.html},
  abstract  = {Q-learning methods are a common class of algorithms used in reinforcement learning (RL). However, their behavior with function approximation, especially with neural networks, is poorly understood theoretically and empirically. In this work, we aim to experimentally investigate potential issues in Q-learning, by means of a "unit testing" framework where we can utilize oracles to disentangle sources of error. Specifically, we investigate questions related to function approximation, sampling error and nonstationarity, and where available, verify if trends found in oracle settings hold true with deep RL methods. We find that large neural network architectures have many benefits with regards to learning stability; offer several practical compensations for overfitting; and develop a novel sampling method based on explicitly compensating for function approximation error that yields fair improvement on high-dimensional continuous control domains.}
}
Endnote
%0 Conference Paper
%T Diagnosing Bottlenecks in Deep Q-learning Algorithms
%A Justin Fu
%A Aviral Kumar
%A Matthew Soh
%A Sergey Levine
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-fu19a
%I PMLR
%P 2021--2030
%U https://proceedings.mlr.press/v97/fu19a.html
%V 97
%X Q-learning methods are a common class of algorithms used in reinforcement learning (RL). However, their behavior with function approximation, especially with neural networks, is poorly understood theoretically and empirically. In this work, we aim to experimentally investigate potential issues in Q-learning, by means of a "unit testing" framework where we can utilize oracles to disentangle sources of error. Specifically, we investigate questions related to function approximation, sampling error and nonstationarity, and where available, verify if trends found in oracle settings hold true with deep RL methods. We find that large neural network architectures have many benefits with regards to learning stability; offer several practical compensations for overfitting; and develop a novel sampling method based on explicitly compensating for function approximation error that yields fair improvement on high-dimensional continuous control domains.
APA
Fu, J., Kumar, A., Soh, M., & Levine, S. (2019). Diagnosing Bottlenecks in Deep Q-learning Algorithms. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2021-2030. Available from https://proceedings.mlr.press/v97/fu19a.html.