Stabilize to Act: Learning to Coordinate for Bimanual Manipulation

Jennifer Grannen, Yilin Wu, Brandon Vu, Dorsa Sadigh
Proceedings of The 7th Conference on Robot Learning, PMLR 229:563-576, 2023.

Abstract

Key to rich, dexterous manipulation in the real world is the ability to coordinate control across two hands. However, while the promise afforded by bimanual robotic systems is immense, constructing control policies for dual arm autonomous systems brings inherent difficulties. One such difficulty is the high-dimensionality of the bimanual action space, which adds complexity to both model-based and data-driven methods. We counteract this challenge by drawing inspiration from humans to propose a novel role assignment framework: a stabilizing arm holds an object in place to simplify the environment while an acting arm executes the task. We instantiate this framework with BimanUal Dexterity from Stabilization (BUDS), which uses a learned restabilizing classifier to alternate between updating a learned stabilization position to keep the environment unchanged, and accomplishing the task with an acting policy learned from demonstrations. We evaluate BUDS on four bimanual tasks of varying complexities on real-world robots, such as zipping jackets and cutting vegetables. Given only 20 demonstrations, BUDS achieves 76.9% task success across our task suite, and generalizes to out-of-distribution objects within a class with a 52.7% success rate. BUDS is 56.0% more successful than an unstructured baseline that instead learns a BC stabilizing policy due to the precision required of these complex tasks. Supplementary material and videos can be found at https://tinyurl.com/stabilizetoact.
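The alternation the abstract describes, a restabilizing classifier that decides at each step whether the stabilizing arm should reposition or the acting arm should proceed, can be sketched as a simple control loop. This is a minimal illustrative sketch, not the paper's implementation: the function names, observation keys, and the toy drift heuristic are all hypothetical stand-ins for the learned components.

```python
def restabilize_classifier(obs):
    """Hypothetical stand-in for the learned restabilizing classifier.
    Returns True when the environment has drifted enough that the
    stabilizing arm should update its hold (toy threshold heuristic)."""
    return obs["drift"] > 0.5

def stabilization_policy(obs):
    """Hypothetical stand-in for the learned stabilization position:
    predict where the stabilizing arm should hold the object."""
    return obs["object_pos"]

def acting_policy(obs):
    """Hypothetical stand-in for the acting policy learned from
    demonstrations: a single-arm action toward reducing task error."""
    return {"move": -0.1 * obs["error"]}

def buds_step(obs):
    """One step of the stabilize-to-act loop: either restabilize with
    the stabilizing arm, or advance the task with the acting arm."""
    if restabilize_classifier(obs):
        return {"arm": "stabilizing", "target": stabilization_policy(obs)}
    return {"arm": "acting", "action": acting_policy(obs)}
```

The key design point the abstract emphasizes is that only one arm's policy is learned from task demonstrations; the other arm's role is reduced to holding a predicted position, which shrinks the effective action space.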

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-grannen23a,
  title = {Stabilize to Act: Learning to Coordinate for Bimanual Manipulation},
  author = {Grannen, Jennifer and Wu, Yilin and Vu, Brandon and Sadigh, Dorsa},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages = {563--576},
  year = {2023},
  editor = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume = {229},
  series = {Proceedings of Machine Learning Research},
  month = {06--09 Nov},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v229/grannen23a/grannen23a.pdf},
  url = {https://proceedings.mlr.press/v229/grannen23a.html},
  abstract = {Key to rich, dexterous manipulation in the real world is the ability to coordinate control across two hands. However, while the promise afforded by bimanual robotic systems is immense, constructing control policies for dual arm autonomous systems brings inherent difficulties. One such difficulty is the high-dimensionality of the bimanual action space, which adds complexity to both model-based and data-driven methods. We counteract this challenge by drawing inspiration from humans to propose a novel role assignment framework: a stabilizing arm holds an object in place to simplify the environment while an acting arm executes the task. We instantiate this framework with BimanUal Dexterity from Stabilization (BUDS), which uses a learned restabilizing classifier to alternate between updating a learned stabilization position to keep the environment unchanged, and accomplishing the task with an acting policy learned from demonstrations. We evaluate BUDS on four bimanual tasks of varying complexities on real-world robots, such as zipping jackets and cutting vegetables. Given only 20 demonstrations, BUDS achieves $76.9\%$ task success across our task suite, and generalizes to out-of-distribution objects within a class with a $52.7\%$ success rate. BUDS is $56.0\%$ more successful than an unstructured baseline that instead learns a BC stabilizing policy due to the precision required of these complex tasks. Supplementary material and videos can be found at https://tinyurl.com/stabilizetoact.}
}
Endnote
%0 Conference Paper %T Stabilize to Act: Learning to Coordinate for Bimanual Manipulation %A Jennifer Grannen %A Yilin Wu %A Brandon Vu %A Dorsa Sadigh %B Proceedings of The 7th Conference on Robot Learning %C Proceedings of Machine Learning Research %D 2023 %E Jie Tan %E Marc Toussaint %E Kourosh Darvish %F pmlr-v229-grannen23a %I PMLR %P 563--576 %U https://proceedings.mlr.press/v229/grannen23a.html %V 229 %X Key to rich, dexterous manipulation in the real world is the ability to coordinate control across two hands. However, while the promise afforded by bimanual robotic systems is immense, constructing control policies for dual arm autonomous systems brings inherent difficulties. One such difficulty is the high-dimensionality of the bimanual action space, which adds complexity to both model-based and data-driven methods. We counteract this challenge by drawing inspiration from humans to propose a novel role assignment framework: a stabilizing arm holds an object in place to simplify the environment while an acting arm executes the task. We instantiate this framework with BimanUal Dexterity from Stabilization (BUDS), which uses a learned restabilizing classifier to alternate between updating a learned stabilization position to keep the environment unchanged, and accomplishing the task with an acting policy learned from demonstrations. We evaluate BUDS on four bimanual tasks of varying complexities on real-world robots, such as zipping jackets and cutting vegetables. Given only 20 demonstrations, BUDS achieves 76.9% task success across our task suite, and generalizes to out-of-distribution objects within a class with a 52.7% success rate. BUDS is 56.0% more successful than an unstructured baseline that instead learns a BC stabilizing policy due to the precision required of these complex tasks. Supplementary material and videos can be found at https://tinyurl.com/stabilizetoact.
APA
Grannen, J., Wu, Y., Vu, B. & Sadigh, D. (2023). Stabilize to Act: Learning to Coordinate for Bimanual Manipulation. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:563-576. Available from https://proceedings.mlr.press/v229/grannen23a.html.
