Selective Object Rearrangement in Clutter

Bingjie Tang, Gaurav S. Sukhatme
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1001-1010, 2023.

Abstract

We propose an image-based, learned method for selective tabletop object rearrangement in clutter using a parallel jaw gripper. Our method consists of three stages: graph-based object sequencing (which object to move), feature-based action selection (whether to push or grasp, and at what position and orientation) and a visual correspondence-based placement policy (where to place a grasped object). Experiments show that this decomposition works well in challenging settings requiring the robot to begin with an initially cluttered scene, selecting only the objects that need to be rearranged while discarding others, and dealing with cases where the goal location for an object is already occupied – making it the first system to address all these concurrently in a purely image-based setting. We also achieve an ~8% improvement in task success rate over the previously best reported result that handles both translation and orientation in less restrictive (un-cluttered, non-selective) settings. We demonstrate zero-shot transfer of our system solely trained in simulation to a real robot selectively rearranging up to everyday objects, many unseen during learning, on a crowded tabletop. Videos: https://sites.google.com/view/selective-rearrangement

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-tang23a,
  title     = {Selective Object Rearrangement in Clutter},
  author    = {Tang, Bingjie and Sukhatme, Gaurav S.},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1001--1010},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/tang23a/tang23a.pdf},
  url       = {https://proceedings.mlr.press/v205/tang23a.html},
  abstract  = {We propose an image-based, learned method for selective tabletop object rearrangement in clutter using a parallel jaw gripper. Our method consists of three stages: graph-based object sequencing (which object to move), feature-based action selection (whether to push or grasp, and at what position and orientation) and a visual correspondence-based placement policy (where to place a grasped object). Experiments show that this decomposition works well in challenging settings requiring the robot to begin with an initially cluttered scene, selecting only the objects that need to be rearranged while discarding others, and dealing with cases where the goal location for an object is already occupied – making it the first system to address all these concurrently in a purely image-based setting. We also achieve an $\sim$ 8% improvement in task success rate over the previously best reported result that handles both translation and orientation in less restrictive (un-cluttered, non-selective) settings. We demonstrate zero-shot transfer of our system solely trained in simulation to a real robot selectively rearranging up to everyday objects, many unseen during learning, on a crowded tabletop. Videos: https://sites.google.com/view/selective-rearrangement}
}
Endnote
%0 Conference Paper
%T Selective Object Rearrangement in Clutter
%A Bingjie Tang
%A Gaurav S. Sukhatme
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-tang23a
%I PMLR
%P 1001--1010
%U https://proceedings.mlr.press/v205/tang23a.html
%V 205
%X We propose an image-based, learned method for selective tabletop object rearrangement in clutter using a parallel jaw gripper. Our method consists of three stages: graph-based object sequencing (which object to move), feature-based action selection (whether to push or grasp, and at what position and orientation) and a visual correspondence-based placement policy (where to place a grasped object). Experiments show that this decomposition works well in challenging settings requiring the robot to begin with an initially cluttered scene, selecting only the objects that need to be rearranged while discarding others, and dealing with cases where the goal location for an object is already occupied – making it the first system to address all these concurrently in a purely image-based setting. We also achieve an $\sim$ 8% improvement in task success rate over the previously best reported result that handles both translation and orientation in less restrictive (un-cluttered, non-selective) settings. We demonstrate zero-shot transfer of our system solely trained in simulation to a real robot selectively rearranging up to everyday objects, many unseen during learning, on a crowded tabletop. Videos: https://sites.google.com/view/selective-rearrangement
APA
Tang, B. & Sukhatme, G.S. (2023). Selective Object Rearrangement in Clutter. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1001-1010. Available from https://proceedings.mlr.press/v205/tang23a.html.