Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills

Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jacob Varley, Alex Irpan, Benjamin Eysenbach, Ryan C Julian, Chelsea Finn, Sergey Levine
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1518-1528, 2021.

Abstract

We consider the problem of learning useful robotic skills from previously collected offline data without access to manually specified rewards or additional online exploration, a setting that is becoming increasingly important for scaling robot learning by reusing past robotic data. In particular, we propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset. We employ goal-conditioned Q-learning with hindsight relabeling and develop several techniques that enable training in a particularly challenging offline setting. We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects. We also show that our method can learn to reach long-horizon goals across multiple episodes through goal chaining, and learn rich representations that can help with downstream tasks through pre-training or auxiliary objectives.
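To make the core idea concrete, below is a minimal sketch of goal-conditioned Q-learning with hindsight relabeling on a fixed offline dataset. This is an illustration, not the paper's implementation: it simplifies to a discrete action space (the paper controls real robots from camera images), it omits the paper's conservative handling of unseen actions and its cross-episode goal chaining, and the network sizes, hyperparameters, dataset format, and the sample_episode helper are all assumptions made for this sketch.

# Minimal sketch: goal-conditioned Q-learning with hindsight relabeling
# on a fixed offline dataset. Illustrative only; see caveats above.
import random
import torch
import torch.nn as nn

STATE_DIM, GOAL_DIM, NUM_ACTIONS = 16, 16, 8  # goals are states, so dims match
GAMMA = 0.9

class GoalConditionedQ(nn.Module):
    """Q(s, g, a): the state and goal are concatenated at the input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + GOAL_DIM, 256), nn.ReLU(),
            nn.Linear(256, NUM_ACTIONS),
        )
    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))

q_net = GoalConditionedQ()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)

def sample_episode(dataset):
    # Hypothetical helper: `dataset` is assumed to be a list of
    # (states, actions) pairs for logged episodes, where `states` is a
    # list of STATE_DIM tensors of length len(actions) + 1.
    return random.choice(dataset)

def hindsight_batch(dataset, batch_size=64):
    """Relabel in hindsight: take a logged transition (s_t, a_t, s_{t+1})
    and use a state s_k reached later in the same episode as the goal.
    The sparse reward is 1 only on the step that reaches the goal
    (here, when k == t + 1), so no manually specified reward is needed."""
    s, a, s2, g, r, done = [], [], [], [], [], []
    for _ in range(batch_size):
        states, actions = sample_episode(dataset)
        t = random.randrange(len(actions))
        k = random.randrange(t + 1, len(states))  # a future state as the goal
        s.append(states[t]); a.append(actions[t]); s2.append(states[t + 1])
        g.append(states[k])
        reached = (k == t + 1)
        r.append(1.0 if reached else 0.0)
        done.append(reached)
    stack = lambda xs: torch.stack(xs)
    return (stack(s), torch.tensor(a), stack(s2), stack(g),
            torch.tensor(r), torch.tensor(done, dtype=torch.float32))

def train_step(dataset):
    s, a, s2, g, r, done = hindsight_batch(dataset)
    q = q_net(s, g).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target; training stops at goal-reaching steps.
        target = r + GAMMA * (1 - done) * q_net(s2, g).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

The point the sketch captures is that hindsight relabeling turns every logged transition into a positive example for reaching some future state of its own trajectory, which is what lets the method learn entirely from unlabeled offline data.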

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-chebotar21a,
  title     = {Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills},
  author    = {Chebotar, Yevgen and Hausman, Karol and Lu, Yao and Xiao, Ted and Kalashnikov, Dmitry and Varley, Jacob and Irpan, Alex and Eysenbach, Benjamin and Julian, Ryan C and Finn, Chelsea and Levine, Sergey},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1518--1528},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/chebotar21a/chebotar21a.pdf},
  url       = {https://proceedings.mlr.press/v139/chebotar21a.html}
}
APA
Chebotar, Y., Hausman, K., Lu, Y., Xiao, T., Kalashnikov, D., Varley, J., Irpan, A., Eysenbach, B., Julian, R.C., Finn, C. & Levine, S. (2021). Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1518-1528. Available from https://proceedings.mlr.press/v139/chebotar21a.html.
