Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables

Kate Rakelly, Aurick Zhou, Chelsea Finn, Sergey Levine, Deirdre Quillen
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5331-5340, 2019.

Abstract

Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While meta-reinforcement learning (meta-RL) algorithms can enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness on sparse reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.
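To make the high-level description above more concrete, the sketch below illustrates (it is not the authors' released implementation) how a latent task variable z might be inferred from collected context and used for posterior-sampling exploration: each context transition contributes a Gaussian factor over z, the factors are combined with the prior by precision weighting, and a single z is sampled from the resulting posterior to condition behavior. The names `encode_transition`, `posterior_over_z`, and `posterior_sample` are illustrative placeholders, and the per-transition encoder is a toy stand-in for a learned network.

```python
# Minimal sketch of inferring a latent task variable from context and
# exploring via posterior sampling. Assumes a hypothetical per-transition
# encoder; all names are illustrative, not the paper's code.
import numpy as np

def encode_transition(s, a, r, s_next):
    """Hypothetical per-transition encoder: returns (mean, variance) of a
    Gaussian factor over a 1-D latent task variable z."""
    feat = np.tanh(np.concatenate([s, a, [r], s_next]).sum(keepdims=True))
    return feat, np.array([1.0])  # unit-variance factor as a placeholder

def posterior_over_z(context, prior_mean=0.0, prior_var=1.0):
    """Combine the prior with one Gaussian factor per context transition.
    Precision-weighted averaging yields the posterior mean and variance."""
    precision = 1.0 / prior_var
    weighted_mean = prior_mean / prior_var
    for (s, a, r, s_next) in context:
        mu, var = encode_transition(s, a, r, s_next)
        precision += 1.0 / var
        weighted_mean += mu / var
    post_var = 1.0 / precision
    post_mean = post_var * weighted_mean
    return post_mean, post_var

def posterior_sample(context, rng):
    """Posterior sampling for exploration: draw one z from the current
    posterior and act as if it were the true task for the next rollout."""
    mean, var = posterior_over_z(context)
    return rng.normal(mean, np.sqrt(var))

# Usage: with no context, z is drawn from the prior (broad exploration);
# as transitions accumulate, the posterior narrows and behavior specializes.
rng = np.random.default_rng(0)
context = []
z = posterior_sample(context, rng)  # prior sample before any experience
context.append((np.array([0.1]), np.array([0.0]), 0.5, np.array([0.2])))
z = posterior_sample(context, rng)  # posterior sample after one transition
```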

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-rakelly19a,
  title     = {Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables},
  author    = {Rakelly, Kate and Zhou, Aurick and Finn, Chelsea and Levine, Sergey and Quillen, Deirdre},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5331--5340},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/rakelly19a/rakelly19a.pdf},
  url       = {https://proceedings.mlr.press/v97/rakelly19a.html},
  abstract  = {Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While meta-reinforcement learning (meta-RL) algorithms can enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness on sparse reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.}
}
Endnote
%0 Conference Paper
%T Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables
%A Kate Rakelly
%A Aurick Zhou
%A Chelsea Finn
%A Sergey Levine
%A Deirdre Quillen
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-rakelly19a
%I PMLR
%P 5331--5340
%U https://proceedings.mlr.press/v97/rakelly19a.html
%V 97
%X Deep reinforcement learning algorithms require large amounts of experience to learn an individual task. While meta-reinforcement learning (meta-RL) algorithms can enable agents to learn new skills from small amounts of experience, several major challenges preclude their practicality. Current methods rely heavily on on-policy experience, limiting their sample efficiency. They also lack mechanisms to reason about task uncertainty when adapting to new tasks, limiting their effectiveness on sparse reward problems. In this paper, we address these challenges by developing an off-policy meta-RL algorithm that disentangles task inference and control. In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience. This probabilistic interpretation enables posterior sampling for structured and efficient exploration. We demonstrate how to integrate these task variables with off-policy RL algorithms to achieve both meta-training and adaptation efficiency. Our method outperforms prior algorithms in sample efficiency by 20-100X as well as in asymptotic performance on several meta-RL benchmarks.
APA
Rakelly, K., Zhou, A., Finn, C., Levine, S. & Quillen, D. (2019). Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5331-5340. Available from https://proceedings.mlr.press/v97/rakelly19a.html.
