HomeRobot: Open-Vocabulary Mobile Manipulation

Sriram Yenamandra, Arun Ramachandran, Karmesh Yadav, Austin S. Wang, Mukul Khanna, Theophile Gervet, Tsung-Yen Yang, Vidhi Jain, Alexander Clegg, John M. Turner, Zsolt Kira, Manolis Savva, Angel X. Chang, Devendra Singh Chaplot, Dhruv Batra, Roozbeh Mottaghi, Yonatan Bisk, Chris Paxton
Proceedings of The 7th Conference on Robot Learning, PMLR 229:1975-2011, 2023.

Abstract

HomeRobot (noun): An affordable compliant robot that navigates homes and manipulates a wide range of objects in order to complete everyday tasks. Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location. This is a foundational challenge for robots to be useful assistants in human environments, because it involves tackling sub-problems from across robotics: perception, language understanding, navigation, and manipulation are all essential to OVMM. In addition, integration of the solutions to these sub-problems poses its own substantial challenges. To drive research in this area, we introduce the HomeRobot OVMM benchmark, where an agent navigates household environments to grasp novel objects and place them on target receptacles. HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch to encourage replication of real-world experiments across labs. We implement both reinforcement learning and heuristic (model-based) baselines and show evidence of sim-to-real transfer. Our baselines achieve a 20% success rate in the real world; our experiments identify ways future research can improve performance. See videos on our website: https://home-robot-ovmm.github.io/.

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-yenamandra23a,
  title     = {HomeRobot: Open-Vocabulary Mobile Manipulation},
  author    = {Yenamandra, Sriram and Ramachandran, Arun and Yadav, Karmesh and Wang, Austin S. and Khanna, Mukul and Gervet, Theophile and Yang, Tsung-Yen and Jain, Vidhi and Clegg, Alexander and Turner, John M. and Kira, Zsolt and Savva, Manolis and Chang, Angel X. and Chaplot, Devendra Singh and Batra, Dhruv and Mottaghi, Roozbeh and Bisk, Yonatan and Paxton, Chris},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {1975--2011},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/yenamandra23a/yenamandra23a.pdf},
  url       = {https://proceedings.mlr.press/v229/yenamandra23a.html},
  abstract  = {HomeRobot (noun): An affordable compliant robot that navigates homes and manipulates a wide range of objects in order to complete everyday tasks. Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location. This is a foundational challenge for robots to be useful assistants in human environments, because it involves tackling sub-problems from across robotics: perception, language understanding, navigation, and manipulation are all essential to OVMM. In addition, integration of the solutions to these sub-problems poses its own substantial challenges. To drive research in this area, we introduce the HomeRobot OVMM benchmark, where an agent navigates household environments to grasp novel objects and place them on target receptacles. HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch to encourage replication of real-world experiments across labs. We implement both reinforcement learning and heuristic (model-based) baselines and show evidence of sim-to-real transfer. Our baselines achieve a 20\% success rate in the real world; our experiments identify ways future research can improve performance. See videos on our website: https://home-robot-ovmm.github.io/.}
}
Endnote
%0 Conference Paper
%T HomeRobot: Open-Vocabulary Mobile Manipulation
%A Sriram Yenamandra
%A Arun Ramachandran
%A Karmesh Yadav
%A Austin S. Wang
%A Mukul Khanna
%A Theophile Gervet
%A Tsung-Yen Yang
%A Vidhi Jain
%A Alexander Clegg
%A John M. Turner
%A Zsolt Kira
%A Manolis Savva
%A Angel X. Chang
%A Devendra Singh Chaplot
%A Dhruv Batra
%A Roozbeh Mottaghi
%A Yonatan Bisk
%A Chris Paxton
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-yenamandra23a
%I PMLR
%P 1975--2011
%U https://proceedings.mlr.press/v229/yenamandra23a.html
%V 229
%X HomeRobot (noun): An affordable compliant robot that navigates homes and manipulates a wide range of objects in order to complete everyday tasks. Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location. This is a foundational challenge for robots to be useful assistants in human environments, because it involves tackling sub-problems from across robotics: perception, language understanding, navigation, and manipulation are all essential to OVMM. In addition, integration of the solutions to these sub-problems poses its own substantial challenges. To drive research in this area, we introduce the HomeRobot OVMM benchmark, where an agent navigates household environments to grasp novel objects and place them on target receptacles. HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch to encourage replication of real-world experiments across labs. We implement both reinforcement learning and heuristic (model-based) baselines and show evidence of sim-to-real transfer. Our baselines achieve a 20% success rate in the real world; our experiments identify ways future research can improve performance. See videos on our website: https://home-robot-ovmm.github.io/.
APA
Yenamandra, S., Ramachandran, A., Yadav, K., Wang, A. S., Khanna, M., Gervet, T., Yang, T., Jain, V., Clegg, A., Turner, J. M., Kira, Z., Savva, M., Chang, A. X., Chaplot, D. S., Batra, D., Mottaghi, R., Bisk, Y., & Paxton, C. (2023). HomeRobot: Open-Vocabulary Mobile Manipulation. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:1975-2011. Available from https://proceedings.mlr.press/v229/yenamandra23a.html.