Model-Free Generative Replay for Lifelong Reinforcement Learning: Application to Starcraft-2

Zachary Alan Daniels, Aswin Raghavan, Jesse Hostetler, Abrar Rahman, Indranil Sur, Michael Piacentino, Ajay Divakaran, Roberto Corizzo, Kamil Faber, Nathalie Japkowicz, Michael Baron, James Smith, Sahana Pramod Joshi, Zsolt Kira, Cameron Ethan Taylor, Mustafa Burak Gurbuz, Constantine Dovrolis, Tyler L. Hayes, Christopher Kanan, Jhair Gallardo
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:1120-1145, 2022.

Abstract

One approach to meet the challenges of deep lifelong reinforcement learning (LRL) is careful management of the agent’s learning experiences, in order to learn (without forgetting) and build internal meta-models (of the tasks, environments, agents, and world). Generative replay (GR) is a biologically-inspired replay mechanism that augments learning experiences with self-labelled examples drawn from an internal generative model that is updated over time. In this paper, we present a version of GR for LRL that satisfies two desiderata: (a) introspective density modelling of the latent representations of policies learned using deep RL, and (b) model-free end-to-end learning. The first property avoids the challenges of density modelling of complex high-dimensional perceptual inputs, whereas policy learning using deep RL works well with such perceptual inputs. The second property avoids the challenges of learning temporal dynamics and reward functions from few learning experiences with sparse rewards. In this work, we study three deep learning architectures for model-free GR, starting from a naive GR and adding ingredients to achieve (a) and (b). We evaluate our proposed algorithms on three different scenarios comprising tasks from the StarCraft-2 and Minigrid domains. We report several key findings showing the impact of the design choices on quantitative metrics that include transfer learning, generalization to unseen tasks, fast adaptation after task change, performance comparable to a task expert, and minimizing catastrophic forgetting. We observe that our GR prevents drift in the features-to-action mapping from the latent vector space of a deep actor-critic agent. We also show improvements in established lifelong learning metrics. We find that a small random replay buffer, used in conjunction with the replay buffer and the generated replay buffer, is needed to significantly increase the stability of training. Overall, we find that hidden replay (a well-known architecture for class-incremental classification) is the most promising approach, pushing the state-of-the-art in GR for LRL.
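To make the mechanism concrete, below is a minimal PyTorch-style sketch (not the authors' code) of one possible reading of model-free generative replay over latent features, in the spirit of hidden replay: a VAE models the density of the actor-critic's latent features, generated latents are self-labelled by the frozen policy head and distilled into the updated head, and a small random buffer of real observations is mixed in for training stability. All class and function names (FeatureEncoder, PolicyHead, LatentVAE, consolidate) and all dimensions are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, LATENT_DIM, N_ACTIONS, CODE_DIM = 128, 64, 8, 16

class FeatureEncoder(nn.Module):
    """Maps raw observations to the latent feature space of the actor-critic."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT_DIM))
    def forward(self, obs):
        return self.net(obs)

class PolicyHead(nn.Module):
    """Maps latent features to action logits (the features-to-action mapping)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT_DIM, N_ACTIONS)
    def forward(self, z):
        return self.net(z)

class LatentVAE(nn.Module):
    """Generative model over latent features (introspective density model)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(LATENT_DIM, 2 * CODE_DIM)
        self.dec = nn.Linear(CODE_DIM, LATENT_DIM)
    def forward(self, z):
        mu, logvar = self.enc(z).chunk(2, dim=-1)
        code = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(code), mu, logvar
    def sample(self, n):
        return self.dec(torch.randn(n, CODE_DIM))

def consolidate(encoder, old_head, new_head, vae, wake_latents, random_obs_buffer,
                steps=200, batch=32):
    """Sleep-style consolidation: mix fresh latents, VAE-generated latents, and
    latents from a small random buffer of real observations; self-label them
    with the frozen old policy head; distill into the new head while refreshing
    the VAE. No dynamics or reward model is learned, hence model-free."""
    opt = torch.optim.Adam(list(new_head.parameters()) + list(vae.parameters()), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        fresh = wake_latents[torch.randint(len(wake_latents), (batch,))]
        generated = vae.sample(batch).detach()        # pseudo-experiences covering earlier tasks
        with torch.no_grad():                         # small random replay buffer for stability
            buffered = encoder(random_obs_buffer[torch.randint(len(random_obs_buffer), (batch,))])
        z = torch.cat([fresh, generated, buffered])
        with torch.no_grad():
            targets = F.softmax(old_head(z), dim=-1)  # self-labelling by the old policy
        distill = F.kl_div(F.log_softmax(new_head(z), dim=-1), targets, reduction="batchmean")
        recon, mu, logvar = vae(z)                    # refresh the density model on the mixture
        vae_loss = F.mse_loss(recon, z) - 0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        (distill + vae_loss).backward()
        opt.step()

# Toy usage with random stand-ins for wake-phase data:
encoder, old_head, new_head, vae = FeatureEncoder(), PolicyHead(), PolicyHead(), LatentVAE()
with torch.no_grad():
    wake_latents = encoder(torch.randn(512, OBS_DIM))  # latents gathered on the current task
random_obs_buffer = torch.randn(256, OBS_DIM)          # small buffer of raw observations
consolidate(encoder, old_head, new_head, vae, wake_latents, random_obs_buffer)

Because replay happens entirely in the latent feature space and the labels come from the policy itself, this kind of sketch never has to learn environment dynamics or reward functions, which is the sense in which the replay is model-free.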

Cite this Paper


BibTeX
@InProceedings{pmlr-v199-daniels22a,
  title     = {Model-Free Generative Replay for Lifelong Reinforcement Learning: Application to Starcraft-2},
  author    = {Daniels, Zachary Alan and Raghavan, Aswin and Hostetler, Jesse and Rahman, Abrar and Sur, Indranil and Piacentino, Michael and Divakaran, Ajay and Corizzo, Roberto and Faber, Kamil and Japkowicz, Nathalie and Baron, Michael and Smith, James and Joshi, Sahana Pramod and Kira, Zsolt and Taylor, Cameron Ethan and Gurbuz, Mustafa Burak and Dovrolis, Constantine and Hayes, Tyler L. and Kanan, Christopher and Gallardo, Jhair},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {1120--1145},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v199/daniels22a/daniels22a.pdf},
  url       = {https://proceedings.mlr.press/v199/daniels22a.html}
}
Endnote
%0 Conference Paper
%T Model-Free Generative Replay for Lifelong Reinforcement Learning: Application to Starcraft-2
%A Zachary Alan Daniels
%A Aswin Raghavan
%A Jesse Hostetler
%A Abrar Rahman
%A Indranil Sur
%A Michael Piacentino
%A Ajay Divakaran
%A Roberto Corizzo
%A Kamil Faber
%A Nathalie Japkowicz
%A Michael Baron
%A James Smith
%A Sahana Pramod Joshi
%A Zsolt Kira
%A Cameron Ethan Taylor
%A Mustafa Burak Gurbuz
%A Constantine Dovrolis
%A Tyler L. Hayes
%A Christopher Kanan
%A Jhair Gallardo
%B Proceedings of The 1st Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2022
%E Sarath Chandar
%E Razvan Pascanu
%E Doina Precup
%F pmlr-v199-daniels22a
%I PMLR
%P 1120--1145
%U https://proceedings.mlr.press/v199/daniels22a.html
%V 199
APA
Daniels, Z.A., Raghavan, A., Hostetler, J., Rahman, A., Sur, I., Piacentino, M., Divakaran, A., Corizzo, R., Faber, K., Japkowicz, N., Baron, M., Smith, J., Joshi, S.P., Kira, Z., Taylor, C.E., Gurbuz, M.B., Dovrolis, C., Hayes, T.L., Kanan, C. & Gallardo, J. (2022). Model-Free Generative Replay for Lifelong Reinforcement Learning: Application to Starcraft-2. Proceedings of The 1st Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 199:1120-1145. Available from https://proceedings.mlr.press/v199/daniels22a.html.
