Error-Aware Imitation Learning from Teleoperation Data for Mobile Manipulation

Josiah Wong, Albert Tung, Andrey Kurenkov, Ajay Mandlekar, Li Fei-Fei, Silvio Savarese, Roberto Martín-Martín
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1367-1378, 2022.

Abstract

In mobile manipulation (MM), robots can both navigate within and interact with their environment and are thus able to complete many more tasks than robots only capable of navigation or manipulation. In this work, we explore how to apply imitation learning (IL) to learn continuous visuo-motor policies for MM tasks. Much prior work has shown that IL can train visuo-motor policies for either manipulation or navigation domains, but few works have applied IL to the MM domain. Doing this is challenging for two reasons: on the data side, current interfaces make collecting high-quality human demonstrations difficult, and on the learning side, policies trained on limited data can suffer from covariate shift when deployed. To address these problems, we first propose Mobile Manipulation RoboTurk (MoMaRT), a novel teleoperation framework allowing simultaneous navigation and manipulation of mobile manipulators, and collect a first-of-its-kind, large-scale dataset in a realistic simulated kitchen setting. We then propose a learned error detection system to address the covariate shift by detecting when an agent is in a potential failure state. We train performant IL policies and error detectors from this data, and achieve over 45% task success rate and 85% error detection success rate across multiple multi-stage tasks when trained on expert data. Additional results and video at https://sites.google.com/view/il-for-mm/home.
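
To make the error-detection idea concrete, the following is a minimal sketch of one way such a detector could be implemented: a binary classifier over the policy's observation features that flags states likely to precede task failure (for example, to request human intervention). This is an illustrative assumption, not the paper's exact architecture; the observation dimension, network sizes, and threshold are placeholders.

# Minimal sketch of a learned error detector (illustrative, not the paper's
# exact model): a binary classifier over observation features that flags
# potential failure states encountered under covariate shift.
import torch
import torch.nn as nn

class ErrorDetector(nn.Module):
    """Predicts the probability that the current state leads to task failure."""

    def __init__(self, obs_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # failure logit
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Returns a logit; apply sigmoid to obtain a failure probability.
        return self.net(obs)


def flag_potential_failure(detector: ErrorDetector,
                           obs: torch.Tensor,
                           threshold: float = 0.5) -> bool:
    """Returns True if the detector believes the agent is in a likely
    failure state, e.g. to trigger a request for human intervention."""
    with torch.no_grad():
        prob = torch.sigmoid(detector(obs)).item()
    return prob > threshold


if __name__ == "__main__":
    # Hypothetical 64-dimensional observation embedding from the policy encoder.
    detector = ErrorDetector(obs_dim=64)
    obs = torch.randn(1, 64)
    print("intervene:", flag_potential_failure(detector, obs))

In practice such a classifier would be trained on states labeled as preceding failures versus states from successful trajectories; the sketch only shows the inference-time check.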

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-wong22a,
  title     = {Error-Aware Imitation Learning from Teleoperation Data for Mobile Manipulation},
  author    = {Wong, Josiah and Tung, Albert and Kurenkov, Andrey and Mandlekar, Ajay and Fei-Fei, Li and Savarese, Silvio and Mart\'in-Mart\'in, Roberto},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {1367--1378},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/wong22a/wong22a.pdf},
  url       = {https://proceedings.mlr.press/v164/wong22a.html}
}
APA
Wong, J., Tung, A., Kurenkov, A., Mandlekar, A., Fei-Fei, L., Savarese, S., & Martín-Martín, R. (2022). Error-Aware Imitation Learning from Teleoperation Data for Mobile Manipulation. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research, 164:1367-1378. Available from https://proceedings.mlr.press/v164/wong22a.html.
