Composing Value Functions in Reinforcement Learning

Benjamin Van Niekerk, Steven James, Adam Earle, Benjamin Rosman;
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6401-6409, 2019.

Abstract

An important property for lifelong-learning agents is the ability to combine existing skills to solve new, unseen tasks. In general, however, it is unclear how to compose existing skills in a principled manner. Under the assumption of deterministic dynamics, we prove that optimal value function composition can be achieved in entropy-regularised reinforcement learning (RL), and extend this result to the standard RL setting. Composition is demonstrated in a high-dimensional video game, where an agent with an existing library of skills is immediately able to solve new tasks without the need for further learning.
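The flavour of composition described above can be sketched concretely. The following is an illustrative example (not the authors' code): given Q-value tables learned for two tasks, a "solve task A or task B" objective can be composed as an element-wise maximum over the task Q-values in the standard RL setting, and as a temperature-scaled log-sum-exp (a soft maximum, here with equal task weights assumed) in the entropy-regularised setting. The Q-tables and the temperature parameter `tau` are hypothetical placeholders.

```python
import numpy as np

# Hypothetical Q-value tables for two previously learned tasks,
# indexed by (state, action). Values are illustrative only.
q_task_a = np.array([[1.0, 0.2], [0.5, 0.8]])
q_task_b = np.array([[0.3, 0.9], [0.7, 0.1]])

def compose_standard(q_list):
    """'Or' composition in standard RL: element-wise max over task Q-values."""
    return np.max(np.stack(q_list), axis=0)

def compose_entropy_regularised(q_list, tau=1.0):
    """'Or' composition in entropy-regularised RL: a temperature-scaled
    log-sum-exp (soft maximum) over task Q-values, assuming equal weights.
    As tau -> 0 this recovers the hard max of compose_standard."""
    stacked = np.stack(q_list) / tau
    return tau * (np.logaddexp.reduce(stacked, axis=0) - np.log(len(q_list)))

# The greedy policy on the composed Q-values then solves the joint task
# without further learning, under the paper's assumptions.
q_or = compose_standard([q_task_a, q_task_b])
```

Note that at low temperature the soft composition approaches the hard max, matching the abstract's claim that the entropy-regularised result extends to the standard RL setting.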