Working Memory Graphs

Ricky Loynd, Roland Fernandez, Asli Celikyilmaz, Adith Swaminathan, Matthew Hausknecht
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6404-6414, 2020.

Abstract

Transformers have increasingly outperformed gated RNNs in obtaining new state-of-the-art results on supervised tasks involving text sequences. Inspired by this trend, we study the question of how Transformer-based models can improve the performance of sequential decision-making agents. We present the Working Memory Graph (WMG), an agent that employs multi-head self-attention to reason over a dynamic set of vectors representing observed and recurrent state. We evaluate WMG in three environments featuring factored observation spaces: a Pathfinding environment that requires complex reasoning over past observations, BabyAI gridworld levels that involve variable goals, and Sokoban which emphasizes future planning. We find that the combination of WMG’s Transformer-based architecture with factored observation spaces leads to significant gains in learning efficiency compared to baseline architectures across all tasks. WMG demonstrates how Transformer-based models can dramatically boost sample efficiency in RL environments for which observations can be factored.
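The mechanism the abstract describes can be sketched concretely: self-attention applied to a set of vectors built from the factored observation plus a small set of recurrent memory vectors that are carried across time steps. The sketch below is a minimal, illustrative PyTorch version of that idea; the class name, layer sizes, single attention layer, and the choice of which vector feeds the policy head are assumptions for illustration, not the authors' released implementation.

# Minimal sketch (assumed sizes and names), not the paper's official code.
import torch
import torch.nn as nn

class WMGSketch(nn.Module):
    def __init__(self, factor_dim=16, mem_slots=8, d_model=64, n_heads=4, n_actions=4):
        super().__init__()
        self.mem_slots, self.d_model = mem_slots, d_model
        self.embed_factor = nn.Linear(factor_dim, d_model)      # embed each observation factor
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.policy_head = nn.Linear(d_model, n_actions)

    def initial_memory(self, batch_size):
        # Recurrent memory vectors start empty and are updated every step.
        return torch.zeros(batch_size, self.mem_slots, self.d_model)

    def forward(self, factors, memory):
        # factors: (batch, n_factors, factor_dim); memory: (batch, mem_slots, d_model)
        tokens = torch.cat([self.embed_factor(factors), memory], dim=1)
        attended, _ = self.attn(tokens, tokens, tokens)          # self-attention over the whole set
        new_memory = attended[:, -self.mem_slots:]               # carried forward as recurrent state
        logits = self.policy_head(attended[:, 0])                # act from one designated vector
        return logits, new_memory

# Usage: a batch of 2 observations, each with 3 factors of dimension 16.
model = WMGSketch()
logits, memory = model(torch.randn(2, 3, 16), model.initial_memory(2))

Because the observation is a variable-sized set of factor vectors rather than a flat feature vector, the attention layer can handle a changing number of factors per step, which is the property the paper exploits in its factored environments.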

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-loynd20a,
  title     = {Working Memory Graphs},
  author    = {Loynd, Ricky and Fernandez, Roland and Celikyilmaz, Asli and Swaminathan, Adith and Hausknecht, Matthew},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6404--6414},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/loynd20a/loynd20a.pdf},
  url       = {https://proceedings.mlr.press/v119/loynd20a.html},
  abstract  = {Transformers have increasingly outperformed gated RNNs in obtaining new state-of-the-art results on supervised tasks involving text sequences. Inspired by this trend, we study the question of how Transformer-based models can improve the performance of sequential decision-making agents. We present the Working Memory Graph (WMG), an agent that employs multi-head self-attention to reason over a dynamic set of vectors representing observed and recurrent state. We evaluate WMG in three environments featuring factored observation spaces: a Pathfinding environment that requires complex reasoning over past observations, BabyAI gridworld levels that involve variable goals, and Sokoban which emphasizes future planning. We find that the combination of WMG’s Transformer-based architecture with factored observation spaces leads to significant gains in learning efficiency compared to baseline architectures across all tasks. WMG demonstrates how Transformer-based models can dramatically boost sample efficiency in RL environments for which observations can be factored.}
}
Endnote
%0 Conference Paper
%T Working Memory Graphs
%A Ricky Loynd
%A Roland Fernandez
%A Asli Celikyilmaz
%A Adith Swaminathan
%A Matthew Hausknecht
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-loynd20a
%I PMLR
%P 6404--6414
%U https://proceedings.mlr.press/v119/loynd20a.html
%V 119
%X Transformers have increasingly outperformed gated RNNs in obtaining new state-of-the-art results on supervised tasks involving text sequences. Inspired by this trend, we study the question of how Transformer-based models can improve the performance of sequential decision-making agents. We present the Working Memory Graph (WMG), an agent that employs multi-head self-attention to reason over a dynamic set of vectors representing observed and recurrent state. We evaluate WMG in three environments featuring factored observation spaces: a Pathfinding environment that requires complex reasoning over past observations, BabyAI gridworld levels that involve variable goals, and Sokoban which emphasizes future planning. We find that the combination of WMG’s Transformer-based architecture with factored observation spaces leads to significant gains in learning efficiency compared to baseline architectures across all tasks. WMG demonstrates how Transformer-based models can dramatically boost sample efficiency in RL environments for which observations can be factored.
APA
Loynd, R., Fernandez, R., Celikyilmaz, A., Swaminathan, A., & Hausknecht, M. (2020). Working Memory Graphs. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6404-6414. Available from https://proceedings.mlr.press/v119/loynd20a.html.
