Quantifying Generalization in Reinforcement Learning

Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, John Schulman
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1282-1289, 2019.

Abstract

In this paper, we investigate the problem of overfitting in deep reinforcement learning. Among the most common benchmarks in RL, it is customary to use the same environments for both training and testing. This practice offers relatively little insight into an agent’s ability to generalize. We address this issue by using procedurally generated environments to construct distinct training and test sets. Most notably, we introduce a new environment called CoinRun, designed as a benchmark for generalization in RL. Using CoinRun, we find that agents overfit to surprisingly large training sets. We then show that deeper convolutional architectures improve generalization, as do methods traditionally found in supervised learning, including L2 regularization, dropout, data augmentation and batch normalization.
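
The abstract describes the setup only at a high level; as a concrete illustration, the hedged PyTorch sketch below combines the ingredients it names: a small convolutional policy with batch normalization and dropout, L2 regularization applied as optimizer weight decay, and disjoint blocks of procedural-generation seeds for training versus held-out test levels. The class name `ConvPolicy`, the seed ranges, the action count, and all hyperparameters are illustrative assumptions rather than the paper's actual configuration, and the PPO training loop and environment construction are omitted.

```python
# Hypothetical sketch (not the authors' code): the regularizers studied in the
# paper, attached to a small convolutional policy, plus a disjoint train/test
# split over procedurally generated level seeds.
import torch
import torch.nn as nn


class ConvPolicy(nn.Module):
    """Nature-CNN-style policy head with batch norm and dropout (illustrative)."""

    def __init__(self, num_actions: int, dropout_p: float = 0.1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 4 * 4, 512), nn.ReLU(),  # 4x4 spatial map from 64x64 input
            nn.Dropout(p=dropout_p),                # dropout, one of the paper's ablations
            nn.Linear(512, num_actions),            # policy logits
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(obs))


# Distinct training and test sets of levels: fix one block of generation seeds
# for training and a disjoint block for evaluation (counts are arbitrary here).
train_seeds = range(0, 500)
test_seeds = range(10_000, 10_500)

policy = ConvPolicy(num_actions=7)  # action count is a placeholder
# L2 regularization expressed as weight decay on the optimizer.
optimizer = torch.optim.Adam(policy.parameters(), lr=5e-4, weight_decay=1e-4)

# Shape check on a dummy batch of 64x64 RGB observations.
logits = policy(torch.zeros(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 7])
```

Data augmentation (e.g. random crops or cutout on observations) would be applied to the observation batch before the forward pass; it is left out of the sketch to keep it short.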

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-cobbe19a,
  title     = {Quantifying Generalization in Reinforcement Learning},
  author    = {Cobbe, Karl and Klimov, Oleg and Hesse, Chris and Kim, Taehoon and Schulman, John},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {1282--1289},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/cobbe19a/cobbe19a.pdf},
  url       = {https://proceedings.mlr.press/v97/cobbe19a.html},
  abstract  = {In this paper, we investigate the problem of overfitting in deep reinforcement learning. Among the most common benchmarks in RL, it is customary to use the same environments for both training and testing. This practice offers relatively little insight into an agent’s ability to generalize. We address this issue by using procedurally generated environments to construct distinct training and test sets. Most notably, we introduce a new environment called CoinRun, designed as a benchmark for generalization in RL. Using CoinRun, we find that agents overfit to surprisingly large training sets. We then show that deeper convolutional architectures improve generalization, as do methods traditionally found in supervised learning, including L2 regularization, dropout, data augmentation and batch normalization.}
}
Endnote
%0 Conference Paper
%T Quantifying Generalization in Reinforcement Learning
%A Karl Cobbe
%A Oleg Klimov
%A Chris Hesse
%A Taehoon Kim
%A John Schulman
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-cobbe19a
%I PMLR
%P 1282--1289
%U https://proceedings.mlr.press/v97/cobbe19a.html
%V 97
%X In this paper, we investigate the problem of overfitting in deep reinforcement learning. Among the most common benchmarks in RL, it is customary to use the same environments for both training and testing. This practice offers relatively little insight into an agent’s ability to generalize. We address this issue by using procedurally generated environments to construct distinct training and test sets. Most notably, we introduce a new environment called CoinRun, designed as a benchmark for generalization in RL. Using CoinRun, we find that agents overfit to surprisingly large training sets. We then show that deeper convolutional architectures improve generalization, as do methods traditionally found in supervised learning, including L2 regularization, dropout, data augmentation and batch normalization.
APA
Cobbe, K., Klimov, O., Hesse, C., Kim, T. & Schulman, J. (2019). Quantifying Generalization in Reinforcement Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:1282-1289. Available from https://proceedings.mlr.press/v97/cobbe19a.html.