OVIR-3D: Open-Vocabulary 3D Instance Retrieval Without Training on 3D Data

Shiyang Lu, Haonan Chang, Eric Pu Jing, Abdeslam Boularias, Kostas Bekris
Proceedings of The 7th Conference on Robot Learning, PMLR 229:1610-1620, 2023.

Abstract

This work presents OVIR-3D, a straightforward yet effective method for open-vocabulary 3D object instance retrieval that does not use any 3D data for training. Given a language query, the method returns a ranked set of 3D object instance segments based on the similarity between each instance's feature and the text query. This is achieved by multi-view fusion of text-aligned 2D region proposals into 3D space, where the 2D region proposal network can leverage 2D datasets, which are more accessible and typically larger than 3D datasets. The fusion process is efficient: it runs in real time for most indoor 3D scenes and requires no additional training in 3D space. Experiments on public datasets and a real robot demonstrate the effectiveness of the method and its potential for applications in robot navigation and manipulation.
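
To make the retrieval step concrete, the sketch below shows how a language query could be matched against fused per-instance features by cosine similarity. This is not the authors' released implementation: the CLIP checkpoint (openai/clip-vit-base-patch32), the rank_instances function, and the randomly generated demo_features are illustrative assumptions. In OVIR-3D the per-instance feature would instead be obtained by fusing text-aligned 2D region features across views; only the final text-to-instance matching is sketched here.

```python
import numpy as np
import torch
from transformers import CLIPModel, CLIPProcessor

# Any text-aligned image-text encoder would do; this checkpoint is an
# illustrative choice, not the one used in the paper.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_instances(query: str, instance_features: np.ndarray, top_k: int = 5):
    """Rank fused 3D instance features by cosine similarity to a text query.

    instance_features is assumed to be an (N, D) array where row i is the
    feature of the i-th fused 3D instance segment, aggregated from the
    text-aligned 2D region features observed across views.
    """
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        text_feat = model.get_text_features(**inputs)            # (1, D)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    feats = torch.from_numpy(instance_features).float()
    feats = feats / feats.norm(dim=-1, keepdim=True)             # (N, D)

    scores = (feats @ text_feat.T).squeeze(-1)                   # cosine similarity per instance
    order = torch.argsort(scores, descending=True)[:top_k]
    return [(int(i), float(scores[i])) for i in order]

# Example with 10 fake instances and 512-dim features (the projection
# dimension of CLIP ViT-B/32); real features would come from the 3D fusion.
demo_features = np.random.randn(10, 512).astype(np.float32)
print(rank_instances("a coffee mug", demo_features))
```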

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-lu23a,
  title     = {OVIR-3D: Open-Vocabulary 3D Instance Retrieval Without Training on 3D Data},
  author    = {Lu, Shiyang and Chang, Haonan and Jing, Eric Pu and Boularias, Abdeslam and Bekris, Kostas},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {1610--1620},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/lu23a/lu23a.pdf},
  url       = {https://proceedings.mlr.press/v229/lu23a.html}
}
Endnote
%0 Conference Paper
%T OVIR-3D: Open-Vocabulary 3D Instance Retrieval Without Training on 3D Data
%A Shiyang Lu
%A Haonan Chang
%A Eric Pu Jing
%A Abdeslam Boularias
%A Kostas Bekris
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-lu23a
%I PMLR
%P 1610--1620
%U https://proceedings.mlr.press/v229/lu23a.html
%V 229
APA
Lu, S., Chang, H., Jing, E. P., Boularias, A. & Bekris, K. (2023). OVIR-3D: Open-Vocabulary 3D Instance Retrieval Without Training on 3D Data. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:1610-1620. Available from https://proceedings.mlr.press/v229/lu23a.html.