Growing Action Spaces

Gregory Farquhar, Laura Gustafson, Zeming Lin, Shimon Whiteson, Nicolas Usunier, Gabriel Synnaeve
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3040-3051, 2020.

Abstract

In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress. In this work, we use a curriculum of progressively growing action spaces to accelerate learning. We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space. Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task. We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces.
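The abstract only outlines the approach at a high level. As a rough illustration of the core mechanism it describes (off-policy value estimation shared across nested action spaces, plus transfer of value estimates when the space grows), a minimal tabular sketch might look like the following. Everything in it, including the toy chain environment, the nested action sets, the growth schedule, and the value-transfer rule, is an assumption made for illustration only; it is not the authors' implementation, which targets deep RL on control and StarCraft micromanagement tasks.

# Hypothetical, minimal tabular sketch of a growing-action-space curriculum: one
# Q-table per nested action space, all updated off-policy from the same transitions,
# with the behaviour policy restricted to the current curriculum level. The toy chain
# environment, growth schedule, and value-transfer rule are illustrative assumptions.
import random
from collections import defaultdict

ACTION_SPACES = [[0, 1], [0, 1, 2, 3], [0, 1, 2, 3, 4, 5]]  # nested: A_0 ⊆ A_1 ⊆ A_2
N_STATES, GAMMA, ALPHA, EPS = 10, 0.95, 0.1, 0.1

def toy_step(state, action):
    """Stand-in chain MDP: even actions move right, odd actions move left."""
    next_state = min(N_STATES - 1, max(0, state + (1 if action % 2 == 0 else -1)))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

# One value table per action-space level, all trained from shared off-policy data.
Q = [defaultdict(lambda: defaultdict(float)) for _ in ACTION_SPACES]

def act(state, level):
    """Epsilon-greedy behaviour policy over the current restricted action space."""
    actions = ACTION_SPACES[level]
    if random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[level][state][a])

def update_all_levels(s, a, r, s2, done):
    """Off-policy Q-learning update of every level whose action space contains a."""
    for lvl, actions in enumerate(ACTION_SPACES):
        if a not in actions:
            continue
        target = r if done else r + GAMMA * max(Q[lvl][s2][b] for b in actions)
        Q[lvl][s][a] += ALPHA * (target - Q[lvl][s][a])

def grow(level):
    """Value transfer on growth: newly unlocked actions inherit the restricted
    space's greedy value as their initial estimate (an assumption of this sketch)."""
    nxt = level + 1
    for s in range(N_STATES):
        best_known = max(Q[level][s][a] for a in ACTION_SPACES[level])
        for a in ACTION_SPACES[nxt]:
            if a not in ACTION_SPACES[level]:
                Q[nxt][s][a] = best_known
    return nxt

level = 0
for episode in range(3000):
    s, done = 0, False
    while not done:
        a = act(s, level)
        s2, r, done = toy_step(s, a)
        update_all_levels(s, a, r, s2, done)
        s = s2
    if episode in (1000, 2000):  # illustrative growth schedule
        level = grow(level)

Because every level's value table is updated from the same experience, data collected under a restricted action space is never wasted when the curriculum grows, which is the efficiency argument the abstract makes.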

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-farquhar20a,
  title     = {Growing Action Spaces},
  author    = {Farquhar, Gregory and Gustafson, Laura and Lin, Zeming and Whiteson, Shimon and Usunier, Nicolas and Synnaeve, Gabriel},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3040--3051},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/farquhar20a/farquhar20a.pdf},
  url       = {https://proceedings.mlr.press/v119/farquhar20a.html},
  abstract  = {In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress. In this work, we use a curriculum of progressively growing action spaces to accelerate learning. We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space. Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task. We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces.}
}
Endnote
%0 Conference Paper
%T Growing Action Spaces
%A Gregory Farquhar
%A Laura Gustafson
%A Zeming Lin
%A Shimon Whiteson
%A Nicolas Usunier
%A Gabriel Synnaeve
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-farquhar20a
%I PMLR
%P 3040--3051
%U https://proceedings.mlr.press/v119/farquhar20a.html
%V 119
%X In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress. In this work, we use a curriculum of progressively growing action spaces to accelerate learning. We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space. Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task. We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces.
APA
Farquhar, G., Gustafson, L., Lin, Z., Whiteson, S., Usunier, N. & Synnaeve, G. (2020). Growing Action Spaces. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3040-3051. Available from https://proceedings.mlr.press/v119/farquhar20a.html.