Language Representations for Generalization in Reinforcement Learning
Proceedings of The 13th Asian Conference on Machine Learning, PMLR 157:390-405, 2021.
Abstract
The choice of state and action representation in Reinforcement Learning (RL) has a significant effect on agent performance for the training task, but its relationship with generalization to new tasks is under-explored. One approach to improving generalization, investigated here, is the use of language as a representation. We compare vector-state and discrete-action representations to language representations. We find that agents using language representations generalize better and can solve tasks with more entities, new entities, and more complexity than seen in the training task. We attribute this to the compositionality of language.