Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation

Charles Sun, Jędrzej Orbik, Coline Manon Devin, Brian H. Yang, Abhishek Gupta, Glen Berseth, Sergey Levine
Proceedings of the 5th Conference on Robot Learning, PMLR 164:308-319, 2022.

Abstract

We study how robots can autonomously learn skills that require a combination of navigation and grasping. Learning robotic skills in the real world remains challenging without large-scale data collection and supervision. Our aim is to devise a robotic reinforcement learning system for learning navigation and manipulation together, in an autonomous way without human intervention, enabling continual learning under realistic assumptions. Specifically, our system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation, without human intervention, and without access to privileged information, such as maps, object positions, or a global view of the environment. Our method employs a modularized policy with components for manipulation and navigation, where uncertainty over the manipulation success drives exploration for the navigation controller, and the manipulation module provides rewards for navigation. We evaluate our method on a room cleanup task, where the robot must navigate to and pick up items scattered on the floor. After a grasp curriculum training phase, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of real-world training.

Cite this Paper
BibTeX
@InProceedings{pmlr-v164-sun22a,
  title     = {Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation},
  author    = {Sun, Charles and Orbik, J\k{e}drzej and Devin, Coline Manon and Yang, Brian H. and Gupta, Abhishek and Berseth, Glen and Levine, Sergey},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {308--319},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/sun22a/sun22a.pdf},
  url       = {https://proceedings.mlr.press/v164/sun22a.html},
  abstract  = {We study how robots can autonomously learn skills that require a combination of navigation and grasping. Learning robotic skills in the real world remains challenging without large-scale data collection and supervision. Our aim is to devise a robotic reinforcement learning system for learning navigation and manipulation together, in an autonomous way without human intervention, enabling continual learning under realistic assumptions. Specifically, our system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation, without human intervention, and without access to privileged information, such as maps, object positions, or a global view of the environment. Our method employs a modularized policy with components for manipulation and navigation, where uncertainty over the manipulation success drives exploration for the navigation controller, and the manipulation module provides rewards for navigation. We evaluate our method on a room cleanup task, where the robot must navigate to and pick up items scattered on the floor. After a grasp curriculum training phase, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of real-world training.}
}
Endnote
%0 Conference Paper
%T Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation
%A Charles Sun
%A Jędrzej Orbik
%A Coline Manon Devin
%A Brian H. Yang
%A Abhishek Gupta
%A Glen Berseth
%A Sergey Levine
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-sun22a
%I PMLR
%P 308--319
%U https://proceedings.mlr.press/v164/sun22a.html
%V 164
%X We study how robots can autonomously learn skills that require a combination of navigation and grasping. Learning robotic skills in the real world remains challenging without large-scale data collection and supervision. Our aim is to devise a robotic reinforcement learning system for learning navigation and manipulation together, in an autonomous way without human intervention, enabling continual learning under realistic assumptions. Specifically, our system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation, without human intervention, and without access to privileged information, such as maps, object positions, or a global view of the environment. Our method employs a modularized policy with components for manipulation and navigation, where uncertainty over the manipulation success drives exploration for the navigation controller, and the manipulation module provides rewards for navigation. We evaluate our method on a room cleanup task, where the robot must navigate to and pick up items scattered on the floor. After a grasp curriculum training phase, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of real-world training.
APA
Sun, C., Orbik, J., Devin, C. M., Yang, B. H., Gupta, A., Berseth, G., & Levine, S. (2022). Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:308-319. Available from https://proceedings.mlr.press/v164/sun22a.html.