LanguageRefer: Spatial-Language Model for 3D Visual Grounding

Junha Roh, Karthik Desingh, Ali Farhadi, Dieter Fox
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1046-1056, 2022.

Abstract

For robots to understand human instructions and perform meaningful tasks in the near future, it is important to develop learned models that comprehend referential language to identify common objects in real-world 3D scenes. In this paper, we introduce a spatial-language model for the 3D visual grounding problem. Specifically, given a reconstructed 3D scene in the form of point clouds with 3D bounding boxes of potential object candidates, and a language utterance referring to a target object in the scene, our model identifies the target object from the set of candidates. LanguageRefer uses a transformer-based architecture that combines spatial embeddings from the bounding boxes with fine-tuned language embeddings from DistilBERT to predict the target object. We show that it performs competitively on the visio-linguistic datasets proposed by ReferIt3D. Further, we analyze its spatial-reasoning performance decoupled from perception noise, its accuracy on view-dependent utterances, and viewpoint annotations for potential robotics applications. Project website: https://sites.google.com/view/language-refer.
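To make the described architecture concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' implementation) of a LanguageRefer-style model: utterance token embeddings and per-object tokens (class-label embedding plus a spatial embedding of the 3D bounding box) are concatenated into one sequence, passed through a transformer encoder, and each object token is scored as the referred target. The names (SpatialLanguageGrounder, embed_dim, ref_head) are hypothetical, and a plain nn.Embedding stands in for the fine-tuned DistilBERT encoder used in the paper.

# Minimal sketch under the assumptions stated above; not the authors' code.
import torch
import torch.nn as nn


class SpatialLanguageGrounder(nn.Module):
    def __init__(self, vocab_size=30522, embed_dim=256, num_layers=4, num_heads=8):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, embed_dim)   # stand-in for DistilBERT embeddings
        self.spatial_embed = nn.Linear(6, embed_dim)              # box center (3) + extent (3)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.ref_head = nn.Linear(embed_dim, 1)                   # one score per object token

    def forward(self, utterance_ids, object_label_ids, object_boxes):
        # utterance_ids:    (B, L)    token ids of the referring utterance
        # object_label_ids: (B, M)    token ids of each candidate's class label
        # object_boxes:     (B, M, 6) 3D bounding-box center + extent per candidate
        lang_tokens = self.token_embed(utterance_ids)                                   # (B, L, D)
        obj_tokens = self.token_embed(object_label_ids) + self.spatial_embed(object_boxes)  # (B, M, D)
        sequence = torch.cat([lang_tokens, obj_tokens], dim=1)                          # (B, L+M, D)
        encoded = self.encoder(sequence)
        obj_encoded = encoded[:, utterance_ids.shape[1]:, :]                            # object tokens only
        return self.ref_head(obj_encoded).squeeze(-1)                                   # (B, M) target logits


# Toy usage: one scene, an 8-token utterance, five candidate objects.
if __name__ == "__main__":
    model = SpatialLanguageGrounder()
    utterance = torch.randint(0, 30522, (1, 8))
    labels = torch.randint(0, 30522, (1, 5))
    boxes = torch.rand(1, 5, 6)
    logits = model(utterance, labels, boxes)
    print("predicted target index:", logits.argmax(dim=-1).item())

Training such a model as a classification over candidate objects (cross-entropy on the per-object logits) mirrors the reference-prediction setup described in the abstract; the real model also fine-tunes the language encoder rather than learning embeddings from scratch.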

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-roh22a,
  title     = {LanguageRefer: Spatial-Language Model for 3D Visual Grounding},
  author    = {Roh, Junha and Desingh, Karthik and Farhadi, Ali and Fox, Dieter},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {1046--1056},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/roh22a/roh22a.pdf},
  url       = {https://proceedings.mlr.press/v164/roh22a.html}
}
Endnote
%0 Conference Paper
%T LanguageRefer: Spatial-Language Model for 3D Visual Grounding
%A Junha Roh
%A Karthik Desingh
%A Ali Farhadi
%A Dieter Fox
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-roh22a
%I PMLR
%P 1046--1056
%U https://proceedings.mlr.press/v164/roh22a.html
%V 164
APA
Roh, J., Desingh, K., Farhadi, A. & Fox, D. (2022). LanguageRefer: Spatial-Language Model for 3D Visual Grounding. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:1046-1056. Available from https://proceedings.mlr.press/v164/roh22a.html.