The Best of Both Modes: Separately Leveraging RGB and Depth for Unseen Object Instance Segmentation

Christopher Xie, Yu Xiang, Arsalan Mousavian, Dieter Fox
Proceedings of the Conference on Robot Learning, PMLR 100:1369-1378, 2020.

Abstract

In order to function in unstructured environments, robots need the ability to recognize unseen novel objects. We take a step in this direction by tackling the problem of segmenting unseen object instances in tabletop environments. However, the type of large-scale real-world dataset required for this task typically does not exist for most robotic settings, which motivates the use of synthetic data. We propose a novel method that separately leverages synthetic RGB and synthetic depth for unseen object instance segmentation. Our method comprises two stages: the first stage operates only on depth to produce rough initial masks, and the second stage refines these masks with RGB. Surprisingly, our framework is able to learn from synthetic RGB-D data where the RGB is non-photorealistic. To train our method, we introduce a large-scale synthetic dataset of random objects on tabletops. We show that our method, trained on this dataset, can produce sharp and accurate masks, outperforming state-of-the-art methods on unseen object instance segmentation. We also show that our method can segment unseen objects for robot grasping. Code, models and video can be found at the project website.
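The two-stage structure described above (depth produces rough instance masks, RGB refines them) can be sketched with classical stand-ins. This is a minimal illustration only: the paper trains learned networks for both stages, whereas here `initial_masks_from_depth` and `refine_masks_with_rgb` are hypothetical placeholders using thresholding, connected components, and morphology.

```python
import numpy as np
from scipy import ndimage

def initial_masks_from_depth(depth, table_depth, margin=0.01):
    # Stage 1 (classical stand-in for the paper's depth network):
    # pixels closer to the camera than the tabletop are foreground;
    # connected components split them into rough instance masks.
    foreground = depth < (table_depth - margin)
    labels, num_instances = ndimage.label(foreground)
    return labels, num_instances

def refine_masks_with_rgb(labels, rgb, num_instances):
    # Stage 2 (classical stand-in for the paper's RGB refinement network):
    # smooth each rough mask with a morphological closing. The rgb image
    # is accepted but unused here, since the real refinement is learned.
    refined = np.zeros_like(labels)
    for i in range(1, num_instances + 1):
        mask = ndimage.binary_closing(labels == i, iterations=2)
        refined[mask] = i
    return refined

# Toy scene: a flat table 1.0 m from the camera with one box 10 cm above it.
depth = np.full((64, 64), 1.0)
depth[20:40, 20:40] = 0.9
rgb = np.zeros((64, 64, 3), dtype=np.uint8)

labels, num_instances = initial_masks_from_depth(depth, table_depth=1.0)
refined = refine_masks_with_rgb(labels, rgb, num_instances)
```

The value of the split, as argued in the abstract, is that the depth stage transfers well from synthetic to real data, while the RGB stage only needs to sharpen boundaries, so it tolerates non-photorealistic synthetic RGB.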

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-xie20b,
  title     = {The Best of Both Modes: Separately Leveraging RGB and Depth for Unseen Object Instance Segmentation},
  author    = {Xie, Christopher and Xiang, Yu and Mousavian, Arsalan and Fox, Dieter},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {1369--1378},
  year      = {2020},
  editor    = {Leslie Pack Kaelbling and Danica Kragic and Komei Sugiura},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/xie20b/xie20b.pdf},
  url       = {http://proceedings.mlr.press/v100/xie20b.html},
  abstract  = {In order to function in unstructured environments, robots need the ability to recognize unseen novel objects. We take a step in this direction by tackling the problem of segmenting unseen object instances in tabletop environments. However, the type of large-scale real-world dataset required for this task typically does not exist for most robotic settings, which motivates the use of synthetic data. We propose a novel method that separately leverages synthetic RGB and synthetic depth for unseen object instance segmentation. Our method comprises two stages: the first stage operates only on depth to produce rough initial masks, and the second stage refines these masks with RGB. Surprisingly, our framework is able to learn from synthetic RGB-D data where the RGB is non-photorealistic. To train our method, we introduce a large-scale synthetic dataset of random objects on tabletops. We show that our method, trained on this dataset, can produce sharp and accurate masks, outperforming state-of-the-art methods on unseen object instance segmentation. We also show that our method can segment unseen objects for robot grasping. Code, models and video can be found at the project website.}
}
Endnote
%0 Conference Paper
%T The Best of Both Modes: Separately Leveraging RGB and Depth for Unseen Object Instance Segmentation
%A Christopher Xie
%A Yu Xiang
%A Arsalan Mousavian
%A Dieter Fox
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-xie20b
%I PMLR
%J Proceedings of Machine Learning Research
%P 1369--1378
%U http://proceedings.mlr.press
%V 100
%W PMLR
%X In order to function in unstructured environments, robots need the ability to recognize unseen novel objects. We take a step in this direction by tackling the problem of segmenting unseen object instances in tabletop environments. However, the type of large-scale real-world dataset required for this task typically does not exist for most robotic settings, which motivates the use of synthetic data. We propose a novel method that separately leverages synthetic RGB and synthetic depth for unseen object instance segmentation. Our method comprises two stages: the first stage operates only on depth to produce rough initial masks, and the second stage refines these masks with RGB. Surprisingly, our framework is able to learn from synthetic RGB-D data where the RGB is non-photorealistic. To train our method, we introduce a large-scale synthetic dataset of random objects on tabletops. We show that our method, trained on this dataset, can produce sharp and accurate masks, outperforming state-of-the-art methods on unseen object instance segmentation. We also show that our method can segment unseen objects for robot grasping. Code, models and video can be found at the project website.
APA
Xie, C., Xiang, Y., Mousavian, A. & Fox, D. (2020). The Best of Both Modes: Separately Leveraging RGB and Depth for Unseen Object Instance Segmentation. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:1369-1378.