Cliff Diving: Exploring Reward Surfaces in Reinforcement Learning Environments

Ryan Sullivan, Jordan K Terry, Benjamin Black, John P Dickerson
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:20744-20776, 2022.

Abstract

Visualizing optimization landscapes has resulted in many fundamental insights in numeric optimization, specifically regarding novel improvements to optimization techniques. However, visualizations of the objective that reinforcement learning optimizes (the "reward surface") have only ever been generated for a small number of narrow contexts. This work presents reward surfaces and related visualizations of 27 of the most widely used reinforcement learning environments in Gym for the first time. We also explore reward surfaces in the policy gradient direction and show for the first time that many popular reinforcement learning environments have frequent "cliffs" (sudden large drops in expected reward). We demonstrate that A2C often "dives off" these cliffs into low reward regions of the parameter space while PPO avoids them, confirming a popular intuition for PPO’s improved performance over previous methods. We additionally introduce a highly extensible library that allows researchers to easily generate these visualizations in the future. Our findings provide new intuition to explain the successes and failures of modern RL methods, and our visualizations concretely characterize several failure modes of reinforcement learning agents in novel ways.
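The reward surface described above is, in essence, the expected episodic return evaluated on a grid of parameter perturbations around a policy. The sketch below is not the authors' released library; it only illustrates the idea for a toy linear policy on CartPole-v1 using gymnasium. The random directions are rescaled to the parameter norm as a simple stand-in for the filter-wise normalization used with neural policies, and the environment name, grid size, and episode counts are illustrative choices. Evaluating the same return function along an estimated policy-gradient direction instead of two random directions yields the 1-D scans in which the paper's "cliffs" appear.

import numpy as np
import gymnasium as gym  # assumed installed; any Gym-style environment works


def evaluate(env, theta, episodes=5, max_steps=500):
    """Mean episodic return of a linear policy with parameter matrix theta."""
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        ep_ret = 0.0
        for _ in range(max_steps):
            # Linear policy: choose the action with the larger logit.
            logits = theta @ obs
            action = int(np.argmax(logits))
            obs, reward, terminated, truncated, _ = env.step(action)
            ep_ret += reward
            if terminated or truncated:
                break
        total += ep_ret
    return total / episodes


def reward_surface(env, theta, span=1.0, resolution=11, episodes=5):
    """Expected return on a 2-D grid of perturbations theta + a*d1 + b*d2."""
    rng = np.random.default_rng(0)
    d1 = rng.standard_normal(theta.shape)
    d2 = rng.standard_normal(theta.shape)
    # Rescale each direction to the norm of theta (a crude normalization;
    # the paper builds on filter-wise normalization for neural policies).
    d1 *= np.linalg.norm(theta) / (np.linalg.norm(d1) + 1e-8)
    d2 *= np.linalg.norm(theta) / (np.linalg.norm(d2) + 1e-8)

    alphas = np.linspace(-span, span, resolution)
    surface = np.zeros((resolution, resolution))
    for i, a in enumerate(alphas):
        for j, b in enumerate(alphas):
            surface[i, j] = evaluate(env, theta + a * d1 + b * d2, episodes)
    return alphas, surface


if __name__ == "__main__":
    env = gym.make("CartPole-v1")
    # In practice theta would come from a trained agent; random parameters
    # here simply make the sketch runnable end to end.
    rng = np.random.default_rng(1)
    theta = 0.1 * rng.standard_normal(
        (env.action_space.n, env.observation_space.shape[0])
    )
    alphas, surface = reward_surface(env, theta, resolution=7, episodes=3)
    print(surface.round(1))

The resulting grid can be plotted as a heatmap or 3-D surface; sudden large drops in return between neighboring grid points are the kind of structure the paper refers to as cliffs.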

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-sullivan22a,
  title     = {Cliff Diving: Exploring Reward Surfaces in Reinforcement Learning Environments},
  author    = {Sullivan, Ryan and Terry, Jordan K and Black, Benjamin and Dickerson, John P},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {20744--20776},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/sullivan22a/sullivan22a.pdf},
  url       = {https://proceedings.mlr.press/v162/sullivan22a.html},
  abstract  = {Visualizing optimization landscapes has resulted in many fundamental insights in numeric optimization, specifically regarding novel improvements to optimization techniques. However, visualizations of the objective that reinforcement learning optimizes (the "reward surface") have only ever been generated for a small number of narrow contexts. This work presents reward surfaces and related visualizations of 27 of the most widely used reinforcement learning environments in Gym for the first time. We also explore reward surfaces in the policy gradient direction and show for the first time that many popular reinforcement learning environments have frequent "cliffs" (sudden large drops in expected reward). We demonstrate that A2C often "dives off" these cliffs into low reward regions of the parameter space while PPO avoids them, confirming a popular intuition for PPO's improved performance over previous methods. We additionally introduce a highly extensible library that allows researchers to easily generate these visualizations in the future. Our findings provide new intuition to explain the successes and failures of modern RL methods, and our visualizations concretely characterize several failure modes of reinforcement learning agents in novel ways.}
}
Endnote
%0 Conference Paper
%T Cliff Diving: Exploring Reward Surfaces in Reinforcement Learning Environments
%A Ryan Sullivan
%A Jordan K Terry
%A Benjamin Black
%A John P Dickerson
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-sullivan22a
%I PMLR
%P 20744--20776
%U https://proceedings.mlr.press/v162/sullivan22a.html
%V 162
%X Visualizing optimization landscapes has resulted in many fundamental insights in numeric optimization, specifically regarding novel improvements to optimization techniques. However, visualizations of the objective that reinforcement learning optimizes (the "reward surface") have only ever been generated for a small number of narrow contexts. This work presents reward surfaces and related visualizations of 27 of the most widely used reinforcement learning environments in Gym for the first time. We also explore reward surfaces in the policy gradient direction and show for the first time that many popular reinforcement learning environments have frequent "cliffs" (sudden large drops in expected reward). We demonstrate that A2C often "dives off" these cliffs into low reward regions of the parameter space while PPO avoids them, confirming a popular intuition for PPO's improved performance over previous methods. We additionally introduce a highly extensible library that allows researchers to easily generate these visualizations in the future. Our findings provide new intuition to explain the successes and failures of modern RL methods, and our visualizations concretely characterize several failure modes of reinforcement learning agents in novel ways.
APA
Sullivan, R., Terry, J.K., Black, B. & Dickerson, J.P. (2022). Cliff Diving: Exploring Reward Surfaces in Reinforcement Learning Environments. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:20744-20776. Available from https://proceedings.mlr.press/v162/sullivan22a.html.