Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research

Johan Samir Obando Ceron, Pablo Samuel Castro
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1373-1383, 2021.

Abstract

Since the introduction of DQN, the vast majority of reinforcement learning research has focused on methods that use deep neural networks as function approximators. New methods are typically evaluated on a set of environments that have now become standard, such as Atari 2600 games. While these benchmarks help standardize evaluation, their computational cost has the unfortunate side effect of widening the gap between those with ample access to computational resources and those without. In this work we argue that, despite the community's emphasis on large-scale environments, the traditional small-scale environments can still yield valuable scientific insights and can help reduce the barriers to entry for underprivileged communities. To substantiate our claims, we empirically revisit the paper that introduced the Rainbow algorithm [Hessel et al., 2018] and present some new insights into the algorithms used by Rainbow.
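To make the kind of small-scale experiment the abstract advocates concrete, here is a minimal sketch (ours, not the authors' code) of a basic DQN agent on a cheap classic-control task, with a single flag for toggling one Rainbow component (double Q-learning) on and off. The environment, network size, hyperparameters, and the use of gymnasium and PyTorch are illustrative assumptions, not details taken from the paper.

import copy
import random
from collections import deque

import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn

USE_DOUBLE_DQN = True          # toggle one Rainbow component on/off

env = gym.make("CartPole-v1")  # a cheap classic-control task
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

q_net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))
target_net = copy.deepcopy(q_net)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, epsilon, batch_size = 0.99, 0.1, 64

obs, _ = env.reset()
for step in range(20_000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = env.action_space.sample()
    else:
        with torch.no_grad():
            action = q_net(torch.tensor(obs, dtype=torch.float32)).argmax().item()

    next_obs, reward, terminated, truncated, _ = env.step(action)
    replay.append((obs, action, reward, next_obs, float(terminated)))
    obs = next_obs if not (terminated or truncated) else env.reset()[0]

    if len(replay) >= batch_size:
        # sample a minibatch of transitions from the replay buffer
        s, a, r, s2, done = zip(*random.sample(replay, batch_size))
        s = torch.tensor(np.array(s), dtype=torch.float32)
        a = torch.tensor(a).unsqueeze(1)
        r = torch.tensor(r, dtype=torch.float32)
        s2 = torch.tensor(np.array(s2), dtype=torch.float32)
        done = torch.tensor(done, dtype=torch.float32)

        with torch.no_grad():
            if USE_DOUBLE_DQN:
                # double DQN: online net selects the action, target net evaluates it
                best_a = q_net(s2).argmax(dim=1, keepdim=True)
                next_q = target_net(s2).gather(1, best_a).squeeze(1)
            else:
                # vanilla DQN: target net both selects and evaluates
                next_q = target_net(s2).max(dim=1).values
            target = r + gamma * (1 - done) * next_q

        q = q_net(s).gather(1, a).squeeze(1)
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if step % 500 == 0:
        target_net.load_state_dict(q_net.state_dict())

Running this kind of toggle for each component, across several seeds and several inexpensive environments, is the ablation style the paper applies to all of Rainbow's components.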

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-ceron21a,
  title     = {Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research},
  author    = {Ceron, Johan Samir Obando and Castro, Pablo Samuel},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1373--1383},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/ceron21a/ceron21a.pdf},
  url       = {https://proceedings.mlr.press/v139/ceron21a.html}
}
APA
Ceron, J.S.O. & Castro, P.S. (2021). Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1373-1383. Available from https://proceedings.mlr.press/v139/ceron21a.html.
