Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following

Valts Blukis, Ross Knepper, Yoav Artzi
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1829-1854, 2021.

Abstract

We study the problem of learning a robot policy to follow natural language instructions that can be easily extended to reason about new objects. We introduce a few-shot language-conditioned object grounding method trained from augmented reality data that uses exemplars to identify objects and align them to their mentions in instructions. We present a learned map representation that encodes object locations and their instructed use, and construct it from our few-shot grounding output. We integrate this mapping approach into an instruction-following policy, thereby allowing it to reason about previously unseen objects at test-time by simply adding exemplars. We evaluate on the task of learning to map raw observations and instructions to continuous control of a physical quadcopter. Our approach significantly outperforms the prior state of the art in the presence of new objects, even when the prior approach observes all objects during training.

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-blukis21a,
  title     = {Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following},
  author    = {Blukis, Valts and Knepper, Ross and Artzi, Yoav},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {1829--1854},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/blukis21a/blukis21a.pdf},
  url       = {https://proceedings.mlr.press/v155/blukis21a.html},
  abstract  = {We study the problem of learning a robot policy to follow natural language instructions that can be easily extended to reason about new objects. We introduce a few-shot language-conditioned object grounding method trained from augmented reality data that uses exemplars to identify objects and align them to their mentions in instructions. We present a learned map representation that encodes object locations and their instructed use, and construct it from our few-shot grounding output. We integrate this mapping approach into an instruction-following policy, thereby allowing it to reason about previously unseen objects at test-time by simply adding exemplars. We evaluate on the task of learning to map raw observations and instructions to continuous control of a physical quadcopter. Our approach significantly outperforms the prior state of the art in the presence of new objects, even when the prior approach observes all objects during training.}
}
Endnote
%0 Conference Paper
%T Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following
%A Valts Blukis
%A Ross Knepper
%A Yoav Artzi
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-blukis21a
%I PMLR
%P 1829--1854
%U https://proceedings.mlr.press/v155/blukis21a.html
%V 155
%X We study the problem of learning a robot policy to follow natural language instructions that can be easily extended to reason about new objects. We introduce a few-shot language-conditioned object grounding method trained from augmented reality data that uses exemplars to identify objects and align them to their mentions in instructions. We present a learned map representation that encodes object locations and their instructed use, and construct it from our few-shot grounding output. We integrate this mapping approach into an instruction-following policy, thereby allowing it to reason about previously unseen objects at test-time by simply adding exemplars. We evaluate on the task of learning to map raw observations and instructions to continuous control of a physical quadcopter. Our approach significantly outperforms the prior state of the art in the presence of new objects, even when the prior approach observes all objects during training.
APA
Blukis, V., Knepper, R. & Artzi, Y. (2021). Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:1829-1854. Available from https://proceedings.mlr.press/v155/blukis21a.html.
