Human-Timescale Adaptation in an Open-Ended Task Space

Jakob Bauer, Kate Baumli, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg, Michael Chang, Natalie Clay, Adrian Collister, Vibhavari Dasagi, Lucy Gonzalez, Karol Gregor, Edward Hughes, Sheleem Kashem, Maria Loks-Thompson, Hannah Openshaw, Jack Parker-Holder, Shreya Pathak, Nicolas Perez-Nieves, Nemanja Rakicevic, Tim Rocktäschel, Yannick Schroecker, Satinder Singh, Jakub Sygnowski, Karl Tuyls, Sarah York, Alexander Zacherl, Lei M Zhang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:1887-1935, 2023.

Abstract

Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent’s capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution. We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.
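The third ingredient, an automated curriculum that prioritises tasks at the frontier of the agent's capabilities, can be sketched generically: sample training tasks in proportion to how uncertain the agent's success on them is, so that trivially easy and currently impossible tasks are deprioritised. This is an illustrative heuristic only, not the paper's exact mechanism; the names `frontier_priority` and `sample_task` are hypothetical.

```python
import random

def frontier_priority(success_rate: float) -> float:
    """Priority peaks at success_rate = 0.5 (maximally informative tasks)
    and vanishes at 0 and 1 (impossible or already-mastered tasks)."""
    return success_rate * (1.0 - success_rate)

def sample_task(task_success_rates: dict) -> str:
    """Sample a task id with probability proportional to its frontier priority."""
    tasks = list(task_success_rates)
    weights = [frontier_priority(task_success_rates[t]) for t in tasks]
    if sum(weights) == 0.0:  # every task is trivially easy or impossible
        return random.choice(tasks)
    return random.choices(tasks, weights=weights, k=1)[0]
```

In practice such curricula track a running estimate of per-task success and resample as the frontier moves; the quadratic weighting above is just one simple choice of priority function.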

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-bauer23a,
  title = {Human-Timescale Adaptation in an Open-Ended Task Space},
  author = {Bauer, Jakob and Baumli, Kate and Behbahani, Feryal and Bhoopchand, Avishkar and Bradley-Schmieg, Nathalie and Chang, Michael and Clay, Natalie and Collister, Adrian and Dasagi, Vibhavari and Gonzalez, Lucy and Gregor, Karol and Hughes, Edward and Kashem, Sheleem and Loks-Thompson, Maria and Openshaw, Hannah and Parker-Holder, Jack and Pathak, Shreya and Perez-Nieves, Nicolas and Rakicevic, Nemanja and Rockt\"{a}schel, Tim and Schroecker, Yannick and Singh, Satinder and Sygnowski, Jakub and Tuyls, Karl and York, Sarah and Zacherl, Alexander and Zhang, Lei M},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages = {1887--1935},
  year = {2023},
  editor = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume = {202},
  series = {Proceedings of Machine Learning Research},
  month = {23--29 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v202/bauer23a/bauer23a.pdf},
  url = {https://proceedings.mlr.press/v202/bauer23a.html},
  abstract = {Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent’s capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution. We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.}
}
Endnote
%0 Conference Paper
%T Human-Timescale Adaptation in an Open-Ended Task Space
%A Jakob Bauer
%A Kate Baumli
%A Feryal Behbahani
%A Avishkar Bhoopchand
%A Nathalie Bradley-Schmieg
%A Michael Chang
%A Natalie Clay
%A Adrian Collister
%A Vibhavari Dasagi
%A Lucy Gonzalez
%A Karol Gregor
%A Edward Hughes
%A Sheleem Kashem
%A Maria Loks-Thompson
%A Hannah Openshaw
%A Jack Parker-Holder
%A Shreya Pathak
%A Nicolas Perez-Nieves
%A Nemanja Rakicevic
%A Tim Rocktäschel
%A Yannick Schroecker
%A Satinder Singh
%A Jakub Sygnowski
%A Karl Tuyls
%A Sarah York
%A Alexander Zacherl
%A Lei M Zhang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-bauer23a
%I PMLR
%P 1887--1935
%U https://proceedings.mlr.press/v202/bauer23a.html
%V 202
%X Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent’s capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution. We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.
APA
Bauer, J., Baumli, K., Behbahani, F., Bhoopchand, A., Bradley-Schmieg, N., Chang, M., Clay, N., Collister, A., Dasagi, V., Gonzalez, L., Gregor, K., Hughes, E., Kashem, S., Loks-Thompson, M., Openshaw, H., Parker-Holder, J., Pathak, S., Perez-Nieves, N., Rakicevic, N., Rocktäschel, T., Schroecker, Y., Singh, S., Sygnowski, J., Tuyls, K., York, S., Zacherl, A., & Zhang, L. M. (2023). Human-Timescale Adaptation in an Open-Ended Task Space. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:1887-1935. Available from https://proceedings.mlr.press/v202/bauer23a.html.