Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter

Michel Breyer, Jen Jen Chung, Lionel Ott, Roland Siegwart, Juan Nieto
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1602-1611, 2021.

Abstract

General robot grasping in clutter requires the ability to synthesize grasps that work for previously unseen objects and that are also robust to physical interactions, such as collisions with other objects in the scene. In this work, we design and train a network that predicts 6 DOF grasps from 3D scene information gathered from an on-board sensor such as a wrist-mounted depth camera. Our proposed Volumetric Grasping Network (VGN) accepts a Truncated Signed Distance Function (TSDF) representation of the scene and directly outputs the predicted grasp quality and the associated gripper orientation and opening width for each voxel in the queried 3D volume. We show that our approach can plan grasps in only 10 ms and is able to clear 92% of the objects in real-world clutter removal experiments without the need for explicit collision checking. The real-time capability opens up the possibility for closed-loop grasp planning, allowing robots to handle disturbances, recover from errors and provide increased robustness.
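To make the per-voxel prediction structure described above concrete, here is a minimal sketch of the network's input/output interface, assuming a PyTorch implementation and a 40x40x40 TSDF voxel grid. The module names, layer sizes, and grid resolution here are illustrative assumptions, not the authors' exact architecture; only the interface (TSDF volume in, per-voxel grasp quality, orientation, and opening width out) follows the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VGNSketch(nn.Module):
    """Illustrative sketch of VGN's interface (not the published architecture)."""

    def __init__(self):
        super().__init__()
        # Shared 3D convolutional trunk over the TSDF volume.
        self.trunk = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Per-voxel heads: grasp quality, gripper orientation
        # (as a quaternion), and gripper opening width.
        self.quality_head = nn.Conv3d(32, 1, kernel_size=1)
        self.rotation_head = nn.Conv3d(32, 4, kernel_size=1)
        self.width_head = nn.Conv3d(32, 1, kernel_size=1)

    def forward(self, tsdf):
        # tsdf: (B, 1, 40, 40, 40) truncated signed distance volume.
        features = self.trunk(tsdf)
        quality = torch.sigmoid(self.quality_head(features))         # (B, 1, 40, 40, 40)
        rotation = F.normalize(self.rotation_head(features), dim=1)  # unit quaternions per voxel
        width = self.width_head(features)                            # (B, 1, 40, 40, 40)
        return quality, rotation, width

if __name__ == "__main__":
    net = VGNSketch()
    quality, rotation, width = net(torch.zeros(1, 1, 40, 40, 40))
    print(quality.shape, rotation.shape, width.shape)

A single forward pass over the whole volume yields a dense grid of grasp candidates, which is what makes the reported ~10 ms planning time and the absence of explicit per-grasp collision checking plausible: grasp selection reduces to picking high-quality voxels from one network evaluation.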

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-breyer21a,
  title     = {Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter},
  author    = {Breyer, Michel and Chung, Jen Jen and Ott, Lionel and Siegwart, Roland and Nieto, Juan},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {1602--1611},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/breyer21a/breyer21a.pdf},
  url       = {https://proceedings.mlr.press/v155/breyer21a.html},
  abstract  = {General robot grasping in clutter requires the ability to synthesize grasps that work for previously unseen objects and that are also robust to physical interactions, such as collisions with other objects in the scene. In this work, we design and train a network that predicts 6 DOF grasps from 3D scene information gathered from an on-board sensor such as a wrist-mounted depth camera. Our proposed Volumetric Grasping Network (VGN) accepts a Truncated Signed Distance Function (TSDF) representation of the scene and directly outputs the predicted grasp quality and the associated gripper orientation and opening width for each voxel in the queried 3D volume. We show that our approach can plan grasps in only 10 ms and is able to clear 92% of the objects in real-world clutter removal experiments without the need for explicit collision checking. The real-time capability opens up the possibility for closed-loop grasp planning, allowing robots to handle disturbances, recover from errors and provide increased robustness.}
}
Endnote
%0 Conference Paper
%T Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter
%A Michel Breyer
%A Jen Jen Chung
%A Lionel Ott
%A Roland Siegwart
%A Juan Nieto
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-breyer21a
%I PMLR
%P 1602--1611
%U https://proceedings.mlr.press/v155/breyer21a.html
%V 155
%X General robot grasping in clutter requires the ability to synthesize grasps that work for previously unseen objects and that are also robust to physical interactions, such as collisions with other objects in the scene. In this work, we design and train a network that predicts 6 DOF grasps from 3D scene information gathered from an on-board sensor such as a wrist-mounted depth camera. Our proposed Volumetric Grasping Network (VGN) accepts a Truncated Signed Distance Function (TSDF) representation of the scene and directly outputs the predicted grasp quality and the associated gripper orientation and opening width for each voxel in the queried 3D volume. We show that our approach can plan grasps in only 10 ms and is able to clear 92% of the objects in real-world clutter removal experiments without the need for explicit collision checking. The real-time capability opens up the possibility for closed-loop grasp planning, allowing robots to handle disturbances, recover from errors and provide increased robustness.
APA
Breyer, M., Chung, J.J., Ott, L., Siegwart, R. & Nieto, J. (2021). Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:1602-1611. Available from https://proceedings.mlr.press/v155/breyer21a.html.