Latent Plans for Task-Agnostic Offline Reinforcement Learning

Erick Rosete-Beas, Oier Mees, Gabriel Kalweit, Joschka Boedecker, Wolfram Burgard
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1838-1849, 2023.

Abstract

Everyday tasks that are long-horizon and comprise a sequence of multiple implicit subtasks still pose a major challenge in offline robot control. While a number of prior methods aimed to address this setting with variants of imitation learning and offline reinforcement learning, the learned behavior is typically narrow and often struggles to reach configurable long-horizon goals. As both paradigms have complementary strengths and weaknesses, we propose a novel hierarchical approach that combines the strengths of both methods to learn task-agnostic long-horizon policies from high-dimensional camera observations. Concretely, we combine a low-level policy that learns latent skills via imitation learning with a high-level policy, learned via offline reinforcement learning, that chains the latent behavior priors. Experiments on various simulated and real robot control tasks show that our formulation produces previously unseen combinations of skills to reach temporally extended goals by “stitching” together latent skills through goal chaining, with an order-of-magnitude improvement in performance over state-of-the-art baselines. We even learn a single multi-task visuomotor policy for 25 distinct manipulation tasks in the real world that outperforms both imitation learning and offline reinforcement learning techniques.
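The hierarchical decomposition described above can be sketched as two composed policies: a high-level policy that maps the current observation and a goal to a latent plan, and a low-level policy that decodes that plan into actions. The sketch below is purely illustrative — the dimensions, the random linear maps standing in for the learned networks, and the function names are all assumptions for exposition, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
OBS_DIM, GOAL_DIM, LATENT_DIM, ACT_DIM = 8, 8, 4, 2

# Random linear maps stand in for the two learned networks.
W_high = rng.standard_normal((LATENT_DIM, OBS_DIM + GOAL_DIM))
W_low = rng.standard_normal((ACT_DIM, OBS_DIM + LATENT_DIM))

def high_level_policy(obs, goal):
    """Map the observation and goal to a latent plan z (learned via offline RL)."""
    return np.tanh(W_high @ np.concatenate([obs, goal]))

def low_level_policy(obs, z):
    """Decode the latent plan into a low-level action (learned via imitation)."""
    return np.tanh(W_low @ np.concatenate([obs, z]))

obs = rng.standard_normal(OBS_DIM)
goal = rng.standard_normal(GOAL_DIM)
z = high_level_policy(obs, goal)      # in practice, replanned periodically
action = low_level_policy(obs, z)
print(action.shape)  # (2,)
```

Conditioning the low-level policy on a latent plan rather than on the final goal is what allows distinct skills from the offline data to be "stitched" into unseen long-horizon combinations.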

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-rosete-beas23a,
  title = {Latent Plans for Task-Agnostic Offline Reinforcement Learning},
  author = {Rosete-Beas, Erick and Mees, Oier and Kalweit, Gabriel and Boedecker, Joschka and Burgard, Wolfram},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages = {1838--1849},
  year = {2023},
  editor = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume = {205},
  series = {Proceedings of Machine Learning Research},
  month = {14--18 Dec},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v205/rosete-beas23a/rosete-beas23a.pdf},
  url = {https://proceedings.mlr.press/v205/rosete-beas23a.html},
  abstract = {Everyday tasks that are long-horizon and comprise a sequence of multiple implicit subtasks still pose a major challenge in offline robot control. While a number of prior methods aimed to address this setting with variants of imitation learning and offline reinforcement learning, the learned behavior is typically narrow and often struggles to reach configurable long-horizon goals. As both paradigms have complementary strengths and weaknesses, we propose a novel hierarchical approach that combines the strengths of both methods to learn task-agnostic long-horizon policies from high-dimensional camera observations. Concretely, we combine a low-level policy that learns latent skills via imitation learning with a high-level policy, learned via offline reinforcement learning, that chains the latent behavior priors. Experiments on various simulated and real robot control tasks show that our formulation produces previously unseen combinations of skills to reach temporally extended goals by “stitching” together latent skills through goal chaining, with an order-of-magnitude improvement in performance over state-of-the-art baselines. We even learn a single multi-task visuomotor policy for 25 distinct manipulation tasks in the real world that outperforms both imitation learning and offline reinforcement learning techniques.}
}
Endnote
%0 Conference Paper
%T Latent Plans for Task-Agnostic Offline Reinforcement Learning
%A Erick Rosete-Beas
%A Oier Mees
%A Gabriel Kalweit
%A Joschka Boedecker
%A Wolfram Burgard
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-rosete-beas23a
%I PMLR
%P 1838--1849
%U https://proceedings.mlr.press/v205/rosete-beas23a.html
%V 205
%X Everyday tasks that are long-horizon and comprise a sequence of multiple implicit subtasks still pose a major challenge in offline robot control. While a number of prior methods aimed to address this setting with variants of imitation learning and offline reinforcement learning, the learned behavior is typically narrow and often struggles to reach configurable long-horizon goals. As both paradigms have complementary strengths and weaknesses, we propose a novel hierarchical approach that combines the strengths of both methods to learn task-agnostic long-horizon policies from high-dimensional camera observations. Concretely, we combine a low-level policy that learns latent skills via imitation learning with a high-level policy, learned via offline reinforcement learning, that chains the latent behavior priors. Experiments on various simulated and real robot control tasks show that our formulation produces previously unseen combinations of skills to reach temporally extended goals by “stitching” together latent skills through goal chaining, with an order-of-magnitude improvement in performance over state-of-the-art baselines. We even learn a single multi-task visuomotor policy for 25 distinct manipulation tasks in the real world that outperforms both imitation learning and offline reinforcement learning techniques.
APA
Rosete-Beas, E., Mees, O., Kalweit, G., Boedecker, J., & Burgard, W. (2023). Latent Plans for Task-Agnostic Offline Reinforcement Learning. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1838-1849. Available from https://proceedings.mlr.press/v205/rosete-beas23a.html.