Dex-NeRF: Using a Neural Radiance Field to Grasp Transparent Objects

Jeffrey Ichnowski, Yahav Avigal, Justin Kerr, Ken Goldberg
Proceedings of the 5th Conference on Robot Learning, PMLR 164:526-536, 2022.

Abstract

The ability to grasp and manipulate transparent objects is a major challenge for robots. Existing depth cameras have difficulty detecting, localizing, and inferring the geometry of such objects. We propose using neural radiance fields (NeRF) to detect, localize, and infer the geometry of transparent objects with sufficient accuracy to find and grasp them securely. We leverage NeRF’s view-independent learned density, place lights to increase specular reflections, and perform a transparency-aware depth-rendering that we feed into the Dex-Net grasp planner. We show how additional lights create specular reflections that improve the quality of the depth map, and test a setup for a robot workcell equipped with an array of cameras to perform transparent object manipulation. We also create synthetic and real datasets of transparent objects in real-world settings, including singulated objects, cluttered tables, and the top rack of a dishwasher. In each setting we show that NeRF and Dex-Net are able to reliably compute robust grasps on transparent objects, achieving 90% and 100% grasp-success rates in physical experiments on an ABB YuMi, on objects where baseline methods fail.
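The transparency-aware depth rendering mentioned in the abstract can be illustrated with a minimal sketch: rather than the standard alpha-composited expected depth (which transparent surfaces pull toward the background), one can report the distance of the first ray sample whose learned density exceeds a threshold. This is an assumption-laden illustration, not the paper's implementation; the function name, inputs, and threshold value are hypothetical.

```python
import numpy as np

def render_depth(densities, t_vals, sigma_thresh=15.0):
    """Transparency-aware depth along a single ray (illustrative sketch).

    densities: per-sample volume densities predicted by a trained NeRF
    t_vals:    distances of those samples along the ray
    Returns the distance of the first sample with density above
    sigma_thresh, or inf if the ray hits nothing dense enough.
    """
    hits = np.nonzero(densities > sigma_thresh)[0]
    return t_vals[hits[0]] if hits.size else np.inf

# Example: a ray whose third sample crosses the density threshold.
depth = render_depth(np.array([0.0, 1.0, 20.0, 50.0]),
                     np.array([0.1, 0.2, 0.3, 0.4]))
```

Applied per pixel over a camera's rays, this yields the depth map that is then fed to the Dex-Net grasp planner.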

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-ichnowski22a,
  title     = {Dex-NeRF: Using a Neural Radiance Field to Grasp Transparent Objects},
  author    = {Ichnowski, Jeffrey and Avigal, Yahav and Kerr, Justin and Goldberg, Ken},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {526--536},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/ichnowski22a/ichnowski22a.pdf},
  url       = {https://proceedings.mlr.press/v164/ichnowski22a.html},
  abstract  = {The ability to grasp and manipulate transparent objects is a major challenge for robots. Existing depth cameras have difficulty detecting, localizing, and inferring the geometry of such objects. We propose using neural radiance fields (NeRF) to detect, localize, and infer the geometry of transparent objects with sufficient accuracy to find and grasp them securely. We leverage NeRF’s view-independent learned density, place lights to increase specular reflections, and perform a transparency-aware depth-rendering that we feed into the Dex-Net grasp planner. We show how additional lights create specular reflections that improve the quality of the depth map, and test a setup for a robot workcell equipped with an array of cameras to perform transparent object manipulation. We also create synthetic and real datasets of transparent objects in real-world settings, including singulated objects, cluttered tables, and the top rack of a dishwasher. In each setting we show that NeRF and Dex-Net are able to reliably compute robust grasps on transparent objects, achieving 90% and 100% grasp-success rates in physical experiments on an ABB YuMi, on objects where baseline methods fail.}
}
Endnote
%0 Conference Paper
%T Dex-NeRF: Using a Neural Radiance Field to Grasp Transparent Objects
%A Jeffrey Ichnowski
%A Yahav Avigal
%A Justin Kerr
%A Ken Goldberg
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-ichnowski22a
%I PMLR
%P 526--536
%U https://proceedings.mlr.press/v164/ichnowski22a.html
%V 164
%X The ability to grasp and manipulate transparent objects is a major challenge for robots. Existing depth cameras have difficulty detecting, localizing, and inferring the geometry of such objects. We propose using neural radiance fields (NeRF) to detect, localize, and infer the geometry of transparent objects with sufficient accuracy to find and grasp them securely. We leverage NeRF’s view-independent learned density, place lights to increase specular reflections, and perform a transparency-aware depth-rendering that we feed into the Dex-Net grasp planner. We show how additional lights create specular reflections that improve the quality of the depth map, and test a setup for a robot workcell equipped with an array of cameras to perform transparent object manipulation. We also create synthetic and real datasets of transparent objects in real-world settings, including singulated objects, cluttered tables, and the top rack of a dishwasher. In each setting we show that NeRF and Dex-Net are able to reliably compute robust grasps on transparent objects, achieving 90% and 100% grasp-success rates in physical experiments on an ABB YuMi, on objects where baseline methods fail.
APA
Ichnowski, J., Avigal, Y., Kerr, J., & Goldberg, K. (2022). Dex-NeRF: Using a Neural Radiance Field to Grasp Transparent Objects. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:526-536. Available from https://proceedings.mlr.press/v164/ichnowski22a.html.