Gesture-Informed Robot Assistance via Foundation Models

Li-Heng Lin, Yuchen Cui, Yilun Hao, Fei Xia, Dorsa Sadigh
Proceedings of The 7th Conference on Robot Learning, PMLR 229:3061-3082, 2023.

Abstract

Gestures serve as a fundamental and significant mode of non-verbal communication among humans. Deictic gestures (such as pointing towards an object), in particular, offer a valuable means of efficiently expressing intent in situations where language is inaccessible, restricted, or highly specialized. As a result, it is essential for robots to comprehend gestures in order to infer human intentions and establish more effective coordination with them. Prior work often relies on a rigid, hand-coded library of gestures along with their meanings. However, the interpretation of gestures is often context-dependent, requiring more flexibility and common-sense reasoning. In this work, we propose a framework, GIRAF, for more flexibly interpreting gesture and language instructions by leveraging the power of large language models. Our framework is able to accurately infer human intent and contextualize the meaning of their gestures for more effective human-robot collaboration. We instantiate the framework for three table-top manipulation tasks and demonstrate that it is both effective and preferred by users. We further demonstrate GIRAF’s ability to reason about diverse types of gestures by curating a GestureInstruct dataset consisting of 36 different task scenarios. GIRAF achieved an 81% success rate in finding the correct plan for tasks in GestureInstruct. Videos and datasets can be found on our project website: https://tinyurl.com/giraf23
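Since this page gives only the abstract, the following is a rough, hedged illustration (not the authors' implementation) of the idea the abstract describes: grounding a deictic gesture to an object and folding that reference, together with a language instruction, into a prompt for an LLM planner. All names, the scene layout, and the helper `resolve_pointing_target` are hypothetical.

```python
# Hedged sketch: ground a pointing gesture to the nearest object along the
# pointing ray, then combine that reference with a spoken instruction into a
# planning prompt for a large language model. Everything here (scene, helper
# names, prompt format) is illustrative, not GIRAF's actual pipeline.
import numpy as np

def resolve_pointing_target(origin, direction, objects):
    """Return the name of the object whose center lies closest to the pointing ray."""
    direction = direction / np.linalg.norm(direction)
    best_name, best_dist = None, float("inf")
    for name, center in objects.items():
        v = np.asarray(center) - origin
        t = float(np.dot(v, direction))  # projection onto the ray
        if t < 0:
            continue  # ignore objects behind the hand
        dist = float(np.linalg.norm(v - t * direction))  # distance to the ray
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Hypothetical tabletop scene (object centers in meters, robot frame).
scene = {"red cup": (0.4, 0.1, 0.02), "sponge": (0.5, -0.2, 0.01), "bowl": (0.3, 0.3, 0.03)}
hand_origin = np.array([0.1, 0.0, 0.3])
pointing_dir = np.array([0.6, 0.2, -0.55])  # e.g., estimated from hand keypoints

target = resolve_pointing_target(hand_origin, pointing_dir, scene)
instruction = "Put that in the bin."
prompt = (
    f"Objects on the table: {', '.join(scene)}.\n"
    f"The user pointed at the {target} and said: '{instruction}'.\n"
    "Write a step-by-step robot plan."
)
print(prompt)  # This prompt would then be passed to an LLM to produce the plan.
```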

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-lin23a,
  title     = {Gesture-Informed Robot Assistance via Foundation Models},
  author    = {Lin, Li-Heng and Cui, Yuchen and Hao, Yilun and Xia, Fei and Sadigh, Dorsa},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {3061--3082},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/lin23a/lin23a.pdf},
  url       = {https://proceedings.mlr.press/v229/lin23a.html},
  abstract  = {Gestures serve as a fundamental and significant mode of non-verbal communication among humans. Deictic gestures (such as pointing towards an object), in particular, offer a valuable means of efficiently expressing intent in situations where language is inaccessible, restricted, or highly specialized. As a result, it is essential for robots to comprehend gestures in order to infer human intentions and establish more effective coordination with them. Prior work often relies on a rigid, hand-coded library of gestures along with their meanings. However, the interpretation of gestures is often context-dependent, requiring more flexibility and common-sense reasoning. In this work, we propose a framework, GIRAF, for more flexibly interpreting gesture and language instructions by leveraging the power of large language models. Our framework is able to accurately infer human intent and contextualize the meaning of their gestures for more effective human-robot collaboration. We instantiate the framework for three table-top manipulation tasks and demonstrate that it is both effective and preferred by users. We further demonstrate GIRAF's ability to reason about diverse types of gestures by curating a GestureInstruct dataset consisting of 36 different task scenarios. GIRAF achieved an $81\%$ success rate in finding the correct plan for tasks in GestureInstruct. Videos and datasets can be found on our project website: https://tinyurl.com/giraf23}
}
Endnote
%0 Conference Paper
%T Gesture-Informed Robot Assistance via Foundation Models
%A Li-Heng Lin
%A Yuchen Cui
%A Yilun Hao
%A Fei Xia
%A Dorsa Sadigh
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-lin23a
%I PMLR
%P 3061--3082
%U https://proceedings.mlr.press/v229/lin23a.html
%V 229
%X Gestures serve as a fundamental and significant mode of non-verbal communication among humans. Deictic gestures (such as pointing towards an object), in particular, offer a valuable means of efficiently expressing intent in situations where language is inaccessible, restricted, or highly specialized. As a result, it is essential for robots to comprehend gestures in order to infer human intentions and establish more effective coordination with them. Prior work often relies on a rigid, hand-coded library of gestures along with their meanings. However, the interpretation of gestures is often context-dependent, requiring more flexibility and common-sense reasoning. In this work, we propose a framework, GIRAF, for more flexibly interpreting gesture and language instructions by leveraging the power of large language models. Our framework is able to accurately infer human intent and contextualize the meaning of their gestures for more effective human-robot collaboration. We instantiate the framework for three table-top manipulation tasks and demonstrate that it is both effective and preferred by users. We further demonstrate GIRAF's ability to reason about diverse types of gestures by curating a GestureInstruct dataset consisting of 36 different task scenarios. GIRAF achieved an 81% success rate in finding the correct plan for tasks in GestureInstruct. Videos and datasets can be found on our project website: https://tinyurl.com/giraf23
APA
Lin, L., Cui, Y., Hao, Y., Xia, F., & Sadigh, D. (2023). Gesture-Informed Robot Assistance via Foundation Models. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:3061-3082. Available from https://proceedings.mlr.press/v229/lin23a.html.