REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation

Zheyuan Hu, Aaron Rovinsky, Jianlan Luo, Vikash Kumar, Abhishek Gupta, Sergey Levine
Proceedings of The 7th Conference on Robot Learning, PMLR 229:1930-1949, 2023.

Abstract

Dexterous manipulation tasks involving contact-rich interactions pose a significant challenge for both model-based control systems and imitation learning algorithms. The complexity arises from the need for multi-fingered robotic hands to dynamically establish and break contacts, balance forces on the non-prehensile object, and control a high number of degrees of freedom. Reinforcement learning (RL) offers a promising approach due to its general applicability and capacity to autonomously acquire optimal manipulation strategies. However, its real-world application is often hindered by the necessity to generate a large number of samples, reset the environment, and obtain reward signals. In this work, we introduce an efficient system for learning dexterous manipulation skills with RL to alleviate these challenges. The main idea of our approach is the integration of recent advancements in sample-efficient RL and replay buffer bootstrapping. This unique combination allows us to utilize data from different tasks or objects as a starting point for training new tasks, significantly improving learning efficiency. Additionally, our system completes the real-world training cycle by incorporating learned resets via an imitation-based pickup policy and learned reward functions, to eliminate the need for manual reset and reward engineering. We show the benefits of reusing past data as replay buffer initialization for new tasks, for instance, the fast acquisition of intricate manipulation skills in the real world on a four-fingered robotic hand. https://sites.google.com/view/reboot-dexterous
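The replay-buffer bootstrapping idea described above (seeding a new task's buffer with transitions collected on earlier tasks or objects) can be sketched as follows. This is an illustrative toy implementation, not the authors' system: the `ReplayBuffer` class, the `bootstrap_buffer` helper, and the transition format are hypothetical names chosen here for exposition.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal FIFO replay buffer holding (s, a, r, s_next, done) tuples."""
    def __init__(self, capacity):
        self.storage = deque(maxlen=capacity)

    def add(self, transition):
        self.storage.append(transition)

    def sample(self, batch_size):
        # Uniform random minibatch, as in standard off-policy RL.
        return random.sample(list(self.storage), batch_size)

def bootstrap_buffer(buffer, prior_transitions):
    """Seed a fresh buffer with experience from earlier tasks/objects,
    so the first gradient updates on a new task draw on related data
    instead of starting from an empty buffer."""
    for t in prior_transitions:
        buffer.add(t)
    return buffer

# Usage: transitions gathered on a different object seed the new task's buffer,
# after which online data from the new task is added as training proceeds.
prior = [((0.0,), 0, 0.0, (0.1,), False) for _ in range(100)]
buf = bootstrap_buffer(ReplayBuffer(capacity=10_000), prior)
batch = buf.sample(32)
```

In a full system the bootstrapped buffer would feed a sample-efficient off-policy learner, with the learned reward function and imitation-based pickup policy closing the reset loop; those components are beyond this sketch.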

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-hu23a,
  title     = {REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation},
  author    = {Hu, Zheyuan and Rovinsky, Aaron and Luo, Jianlan and Kumar, Vikash and Gupta, Abhishek and Levine, Sergey},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {1930--1949},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/hu23a/hu23a.pdf},
  url       = {https://proceedings.mlr.press/v229/hu23a.html},
  abstract  = {Dexterous manipulation tasks involving contact-rich interactions pose a significant challenge for both model-based control systems and imitation learning algorithms. The complexity arises from the need for multi-fingered robotic hands to dynamically establish and break contacts, balance forces on the non-prehensile object, and control a high number of degrees of freedom. Reinforcement learning (RL) offers a promising approach due to its general applicability and capacity to autonomously acquire optimal manipulation strategies. However, its real-world application is often hindered by the necessity to generate a large number of samples, reset the environment, and obtain reward signals. In this work, we introduce an efficient system for learning dexterous manipulation skills with RL to alleviate these challenges. The main idea of our approach is the integration of recent advancements in sample-efficient RL and replay buffer bootstrapping. This unique combination allows us to utilize data from different tasks or objects as a starting point for training new tasks, significantly improving learning efficiency. Additionally, our system completes the real-world training cycle by incorporating learned resets via an imitation-based pickup policy and learned reward functions, to eliminate the need for manual reset and reward engineering. We show the benefits of reusing past data as replay buffer initialization for new tasks, for instance, the fast acquisition of intricate manipulation skills in the real world on a four-fingered robotic hand. https://sites.google.com/view/reboot-dexterous}
}
Endnote
%0 Conference Paper %T REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation %A Zheyuan Hu %A Aaron Rovinsky %A Jianlan Luo %A Vikash Kumar %A Abhishek Gupta %A Sergey Levine %B Proceedings of The 7th Conference on Robot Learning %C Proceedings of Machine Learning Research %D 2023 %E Jie Tan %E Marc Toussaint %E Kourosh Darvish %F pmlr-v229-hu23a %I PMLR %P 1930--1949 %U https://proceedings.mlr.press/v229/hu23a.html %V 229 %X Dexterous manipulation tasks involving contact-rich interactions pose a significant challenge for both model-based control systems and imitation learning algorithms. The complexity arises from the need for multi-fingered robotic hands to dynamically establish and break contacts, balance forces on the non-prehensile object, and control a high number of degrees of freedom. Reinforcement learning (RL) offers a promising approach due to its general applicability and capacity to autonomously acquire optimal manipulation strategies. However, its real-world application is often hindered by the necessity to generate a large number of samples, reset the environment, and obtain reward signals. In this work, we introduce an efficient system for learning dexterous manipulation skills with RL to alleviate these challenges. The main idea of our approach is the integration of recent advancements in sample-efficient RL and replay buffer bootstrapping. This unique combination allows us to utilize data from different tasks or objects as a starting point for training new tasks, significantly improving learning efficiency. Additionally, our system completes the real-world training cycle by incorporating learned resets via an imitation-based pickup policy and learned reward functions, to eliminate the need for manual reset and reward engineering. We show the benefits of reusing past data as replay buffer initialization for new tasks, for instance, the fast acquisition of intricate manipulation skills in the real world on a four-fingered robotic hand. https://sites.google.com/view/reboot-dexterous
APA
Hu, Z., Rovinsky, A., Luo, J., Kumar, V., Gupta, A. & Levine, S. (2023). REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:1930-1949. Available from https://proceedings.mlr.press/v229/hu23a.html.