Learning and Retrieval from Prior Data for Skill-based Imitation Learning

Soroush Nasiriany, Tian Gao, Ajay Mandlekar, Yuke Zhu
Proceedings of The 6th Conference on Robot Learning, PMLR 205:2181-2204, 2023.

Abstract

Imitation learning offers a promising path for robots to learn general-purpose tasks, but traditionally has enjoyed limited scalability due to high data supervision requirements and brittle generalization. Inspired by recent work on skill-based imitation learning, we investigate whether leveraging prior data from previous related tasks can enable learning novel tasks in a more robust, data-efficient manner. To make effective use of the prior data, the agent must internalize knowledge from the prior data and contextualize this knowledge in novel tasks. To that end we propose a skill-based imitation learning framework that extracts temporally-extended sensorimotor skills from prior data and subsequently learns a policy for the target task with respect to these learned skills. We find a number of modeling choices significantly improve performance on novel tasks, namely representation learning objectives to enable more predictable and consistent skill representations and a retrieval-based data augmentation procedure to increase the scope of supervision for the policy. On a number of multi-task manipulation domains, we demonstrate that our method significantly outperforms existing imitation learning and offline reinforcement learning approaches. Videos and code are available at https://ut-austin-rpl.github.io/sailor
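The retrieval-based data augmentation described above can be sketched, under simplified assumptions, as a nearest-neighbor lookup in a learned feature space: embed target-task observations, then pull in the prior-data samples that lie closest to them so they can supplement policy supervision. The function `retrieve_prior_data`, the Euclidean metric, and the toy 2-D "skill embeddings" below are illustrative stand-ins, not the paper's exact procedure.

```python
import numpy as np

def retrieve_prior_data(target_feats, prior_feats, k):
    """Illustrative retrieval step: for each target-task feature vector,
    find the k nearest prior-data features (Euclidean distance) and
    return the union of retrieved prior-data indices."""
    # pairwise squared distances, shape (n_target, n_prior)
    d2 = ((target_feats[:, None, :] - prior_feats[None, :, :]) ** 2).sum(-1)
    # indices of the k closest prior samples per target sample
    nearest = np.argsort(d2, axis=1)[:, :k]
    return np.unique(nearest)

# toy example: 2-D feature vectors standing in for skill embeddings
rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(5, 2))
# prior data = 5 near-duplicates of the target features + 20 unrelated points
prior = np.vstack([target + 0.01, rng.normal(10.0, 1.0, size=(20, 2))])

idx = retrieve_prior_data(target, prior, k=1)
print(sorted(idx.tolist()))  # [0, 1, 2, 3, 4]
```

Only the prior samples resembling the target task are retrieved; the distant cluster is ignored, which is the intuition behind widening the policy's supervision with relevant prior data.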

Cite this Paper
BibTeX
@InProceedings{pmlr-v205-nasiriany23a,
  title     = {Learning and Retrieval from Prior Data for Skill-based Imitation Learning},
  author    = {Nasiriany, Soroush and Gao, Tian and Mandlekar, Ajay and Zhu, Yuke},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {2181--2204},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/nasiriany23a/nasiriany23a.pdf},
  url       = {https://proceedings.mlr.press/v205/nasiriany23a.html},
  abstract  = {Imitation learning offers a promising path for robots to learn general-purpose tasks, but traditionally has enjoyed limited scalability due to high data supervision requirements and brittle generalization. Inspired by recent work on skill-based imitation learning, we investigate whether leveraging prior data from previous related tasks can enable learning novel tasks in a more robust, data-efficient manner. To make effective use of the prior data, the agent must internalize knowledge from the prior data and contextualize this knowledge in novel tasks. To that end we propose a skill-based imitation learning framework that extracts temporally-extended sensorimotor skills from prior data and subsequently learns a policy for the target task with respect to these learned skills. We find a number of modeling choices significantly improve performance on novel tasks, namely representation learning objectives to enable more predictable and consistent skill representations and a retrieval-based data augmentation procedure to increase the scope of supervision for the policy. On a number of multi-task manipulation domains, we demonstrate that our method significantly outperforms existing imitation learning and offline reinforcement learning approaches. Videos and code are available at https://ut-austin-rpl.github.io/sailor}
}
Endnote
%0 Conference Paper
%T Learning and Retrieval from Prior Data for Skill-based Imitation Learning
%A Soroush Nasiriany
%A Tian Gao
%A Ajay Mandlekar
%A Yuke Zhu
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-nasiriany23a
%I PMLR
%P 2181--2204
%U https://proceedings.mlr.press/v205/nasiriany23a.html
%V 205
%X Imitation learning offers a promising path for robots to learn general-purpose tasks, but traditionally has enjoyed limited scalability due to high data supervision requirements and brittle generalization. Inspired by recent work on skill-based imitation learning, we investigate whether leveraging prior data from previous related tasks can enable learning novel tasks in a more robust, data-efficient manner. To make effective use of the prior data, the agent must internalize knowledge from the prior data and contextualize this knowledge in novel tasks. To that end we propose a skill-based imitation learning framework that extracts temporally-extended sensorimotor skills from prior data and subsequently learns a policy for the target task with respect to these learned skills. We find a number of modeling choices significantly improve performance on novel tasks, namely representation learning objectives to enable more predictable and consistent skill representations and a retrieval-based data augmentation procedure to increase the scope of supervision for the policy. On a number of multi-task manipulation domains, we demonstrate that our method significantly outperforms existing imitation learning and offline reinforcement learning approaches. Videos and code are available at https://ut-austin-rpl.github.io/sailor
APA
Nasiriany, S., Gao, T., Mandlekar, A. & Zhu, Y. (2023). Learning and Retrieval from Prior Data for Skill-based Imitation Learning. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:2181-2204. Available from https://proceedings.mlr.press/v205/nasiriany23a.html.