Policy and Value Transfer in Lifelong Reinforcement Learning


David Abel, Yuu Jinnai, Sophie Yue Guo, George Konidaris, Michael Littman;
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:20-29, 2018.


We consider the problem of how best to use prior experience to bootstrap lifelong learning, where an agent faces a series of task instances drawn from some task distribution. First, we identify the initial policy that optimizes expected performance over the distribution of tasks for increasingly complex classes of policy and task distribution. We empirically demonstrate the relative performance of each policy class's optimal element in a variety of simple task distributions. We then consider value-function initialization methods that preserve PAC guarantees while simultaneously minimizing the learning required in two learning algorithms, yielding MaxQInit, a practical new method for value-function-based transfer. We show that MaxQInit performs well in simple lifelong RL experiments.
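The core of MaxQInit is to initialize the Q-function for a new task to the maximum, over previously solved tasks, of each state-action pair's optimal Q-value; with enough sampled tasks this initialization remains optimistic with high probability, preserving PAC guarantees. A minimal sketch of that initialization step (function and variable names here are illustrative assumptions, not the authors' code):

```python
def max_q_init(prior_q_tables, states, actions):
    """Sketch of MaxQInit: initialize the new task's Q-values to the
    maximum optimal Q-value observed across previously solved tasks.

    prior_q_tables: list of dicts mapping (state, action) -> optimal
    Q-value for each earlier task (an assumed representation)."""
    q_init = {}
    for s in states:
        for a in actions:
            # Optimistic start: the max across prior tasks upper-bounds
            # the new task's Q*(s, a) with high probability once enough
            # tasks have been sampled from the distribution.
            q_init[(s, a)] = max(q[(s, a)] for q in prior_q_tables)
    return q_init
```

The learner then runs its usual PAC algorithm (e.g., R-Max or delayed Q-learning, as in the paper) starting from `q_init` instead of a uniform optimistic value, reducing unnecessary exploration on the new task.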
