Visualizing and Understanding Atari Agents

Samuel Greydanus, Anurag Koul, Jonathan Dodge, Alan Fern
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1792-1801, 2018.

Abstract

While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari 2600 environments. In particular, we focus on using saliency maps to understand how an agent learns and executes a policy. We introduce a method for generating useful saliency maps and use it to show 1) what strong agents attend to, 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during learning. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Overall, our results show that saliency information can provide significant insight into an RL agent’s decisions and learning behavior.
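The abstract describes generating saliency maps for deep RL agents but does not spell out the procedure. As a loose illustration only (not the authors' exact method), a perturbation-based saliency map for an image-input policy can be sketched as follows; the `policy` function and the mean-value "blur" stand-in are hypothetical placeholders for a trained agent's network and a proper Gaussian blur:

```python
import numpy as np

def gaussian_mask(shape, center, sigma=5.0):
    """2D Gaussian bump used to locally perturb the input frame."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def saliency_map(policy, frame, stride=8, sigma=5.0):
    """Score each region by how much perturbing it changes the policy output.

    `policy` maps a 2D frame to a vector of action scores (hypothetical
    stand-in for an agent's network). Regions whose perturbation changes
    the output most are deemed most salient.
    """
    base = policy(frame)
    # Crude stand-in for a Gaussian-blurred frame: a constant mean image.
    blurred = frame.mean() * np.ones_like(frame)
    sal = np.zeros(frame.shape)
    for i in range(0, frame.shape[0], stride):
        for j in range(0, frame.shape[1], stride):
            m = gaussian_mask(frame.shape, (i, j), sigma)
            perturbed = frame * (1 - m) + blurred * m
            sal[i, j] = 0.5 * np.sum((policy(perturbed) - base) ** 2)
    return sal
```

For example, if the policy only reads a small patch of the frame, the resulting map concentrates its mass around that patch, which is the kind of "what does the agent attend to" evidence the paper studies.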

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-greydanus18a,
  title     = {Visualizing and Understanding {A}tari Agents},
  author    = {Greydanus, Samuel and Koul, Anurag and Dodge, Jonathan and Fern, Alan},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1792--1801},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/greydanus18a/greydanus18a.pdf},
  url       = {https://proceedings.mlr.press/v80/greydanus18a.html},
  abstract  = {While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari 2600 environments. In particular, we focus on using saliency maps to understand how an agent learns and executes a policy. We introduce a method for generating useful saliency maps and use it to show 1) what strong agents attend to, 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during learning. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Overall, our results show that saliency information can provide significant insight into an RL agent’s decisions and learning behavior.}
}
Endnote
%0 Conference Paper
%T Visualizing and Understanding Atari Agents
%A Samuel Greydanus
%A Anurag Koul
%A Jonathan Dodge
%A Alan Fern
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-greydanus18a
%I PMLR
%P 1792--1801
%U https://proceedings.mlr.press/v80/greydanus18a.html
%V 80
%X While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari 2600 environments. In particular, we focus on using saliency maps to understand how an agent learns and executes a policy. We introduce a method for generating useful saliency maps and use it to show 1) what strong agents attend to, 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during learning. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Overall, our results show that saliency information can provide significant insight into an RL agent’s decisions and learning behavior.
APA
Greydanus, S., Koul, A., Dodge, J. & Fern, A. (2018). Visualizing and Understanding Atari Agents. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1792-1801. Available from https://proceedings.mlr.press/v80/greydanus18a.html.