Language Representations for Generalization in Reinforcement Learning

Nikolaj Goodger, Peter Vamplew, Cameron Foale, Richard Dazeley
Proceedings of The 13th Asian Conference on Machine Learning, PMLR 157:390-405, 2021.

Abstract

The choice of state and action representation in Reinforcement Learning (RL) has a significant effect on agent performance on the training task, but its relationship with generalization to new tasks is under-explored. One approach to improving generalization, investigated here, is the use of language as a representation. We compare vector-state and discrete-action representations with language representations. We find that agents using language representations generalize better and can solve tasks with more entities, new entities, and more complexity than seen in the training task. We attribute this to the compositionality of language.
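
To make the contrast concrete, the sketch below illustrates the two kinds of observation the abstract compares. It is not code from the paper: the gridworld, the entity names, and the helper functions are all hypothetical, chosen only to show why a fixed-length vector observation is brittle while a language observation composes over any number of (possibly unseen) entities.

# Illustrative sketch only -- not from the paper. Encodes the same
# hypothetical gridworld state as (a) a fixed-length vector and
# (b) a compositional language string.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str  # e.g. "key", "door"
    x: int
    y: int

def vector_obs(entities, max_entities=2):
    """Fixed-length vector: entity positions packed into a fixed number of
    slots. A third entity, or an entity type unseen at training time,
    cannot be represented without changing the observation space."""
    obs = []
    for e in entities[:max_entities]:
        obs.extend([e.x, e.y])
    # Pad unused slots with zeros so the length is always 2 * max_entities.
    obs.extend([0, 0] * (max_entities - len(entities[:max_entities])))
    return obs

def language_obs(entities):
    """Language string: one clause per entity, so the representation
    composes over arbitrarily many entities and new entity names reuse
    the same vocabulary and syntax."""
    return " ".join(f"{e.name} at {e.x} {e.y}." for e in entities)

state = [Entity("key", 1, 3), Entity("door", 4, 0), Entity("gem", 2, 2)]
print(vector_obs(state))    # [1, 3, 4, 0] -- the third entity is silently dropped
print(language_obs(state))  # key at 1 3. door at 4 0. gem at 2 2.

The compositionality the abstract credits is visible in the second encoding: adding an entity adds a clause, rather than forcing a change to the input dimensionality.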

Cite this Paper


BibTeX
@InProceedings{pmlr-v157-goodger21a,
  title     = {Language Representations for Generalization in Reinforcement Learning},
  author    = {Goodger, Nikolaj and Vamplew, Peter and Foale, Cameron and Dazeley, Richard},
  booktitle = {Proceedings of The 13th Asian Conference on Machine Learning},
  pages     = {390--405},
  year      = {2021},
  editor    = {Balasubramanian, Vineeth N. and Tsang, Ivor},
  volume    = {157},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v157/goodger21a/goodger21a.pdf},
  url       = {https://proceedings.mlr.press/v157/goodger21a.html}
}
Endnote
%0 Conference Paper
%T Language Representations for Generalization in Reinforcement Learning
%A Nikolaj Goodger
%A Peter Vamplew
%A Cameron Foale
%A Richard Dazeley
%B Proceedings of The 13th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Vineeth N. Balasubramanian
%E Ivor Tsang
%F pmlr-v157-goodger21a
%I PMLR
%P 390--405
%U https://proceedings.mlr.press/v157/goodger21a.html
%V 157
APA
Goodger, N., Vamplew, P., Foale, C. & Dazeley, R. (2021). Language Representations for Generalization in Reinforcement Learning. Proceedings of The 13th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 157:390-405. Available from https://proceedings.mlr.press/v157/goodger21a.html.
