Intention Estimation via Gaze for Robot Guidance in Hierarchical Tasks

Yifan Shen, Xiaoyu Mo, Vytas Krisciunas, David Hanson, Bertram E. Shi
Proceedings of The 1st Gaze Meets ML workshop, PMLR 210:140-164, 2023.

Abstract

To provide effective guidance to a human agent performing hierarchical tasks, a robot must determine the level at which to provide guidance. This relies on estimating the agent’s intention at each level of the hierarchy. Unfortunately, observations of task-related movements provide direct information about intention only at the lowest level. In addition, lower-level tasks may be shared. The resulting ambiguity impairs timely estimation of higher-level intent. This can be resolved by incorporating observations of secondary behaviors such as gaze. We propose a probabilistic framework enabling robot guidance in hierarchical tasks via intention estimation from observations of both task-related movements and eye gaze. Experiments with a virtual humanoid robot demonstrate that gaze is a very powerful cue that largely compensates for simplifying assumptions made in modelling task-related movements, enabling a robot controlled by our framework to nearly match the performance of a human wizard. We examine the effect of gaze on both the precision and timeliness of guidance cue generation, finding that while both improve with gaze, the improvements in timeliness are more significant. Our results suggest that gaze observations are critical to achieving natural and fluid human-robot collaboration, which may enable human agents to undertake significantly more complex tasks and perform them more safely and effectively than would be possible without guidance.
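
The abstract does not spell out the model itself, but its core idea (recursively fusing movement and gaze evidence to estimate intent over a task hierarchy) can be illustrated with a minimal Bayesian sketch. The Python below is not the authors' implementation: the two-task hierarchy, the subtask names, the likelihood values, the conditional independence of movement and gaze given the subtask, and the uniform subtask prior within each task are all illustrative assumptions.

import numpy as np

# Hypothetical two-level hierarchy: each high-level task comprises
# low-level subtasks, and subtasks may be shared across tasks
# (here, both tasks start by picking a bolt).
TASKS = {"assemble_A": ["pick_bolt", "pick_plate"],
         "assemble_B": ["pick_bolt", "pick_bracket"]}

SUBTASKS = sorted({s for subs in TASKS.values() for s in subs})

def update_beliefs(task_belief, p_move_given_subtask, p_gaze_given_subtask):
    """One recursive Bayesian update over high-level task intentions.

    task_belief: dict task -> prior probability.
    p_move_given_subtask / p_gaze_given_subtask: dict subtask ->
        likelihood of the current movement / gaze observation.
    Returns the normalized posterior belief over tasks.
    """
    # Joint likelihood of the observations under each subtask,
    # assuming movement and gaze are conditionally independent.
    p_obs = {s: p_move_given_subtask[s] * p_gaze_given_subtask[s]
             for s in SUBTASKS}

    # Lift to the task level: a task's likelihood is the average over
    # its subtasks (assumed uniform subtask prior within each task).
    posterior = {}
    for task, subs in TASKS.items():
        likelihood = np.mean([p_obs[s] for s in subs])
        posterior[task] = task_belief[task] * likelihood
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

# Example: movement alone cannot distinguish the two tasks while the
# agent reaches for the shared bolt, but a gaze fixation near the
# bracket shifts belief toward assemble_B.
prior = {"assemble_A": 0.5, "assemble_B": 0.5}
move_like = {"pick_bolt": 0.9, "pick_plate": 0.05, "pick_bracket": 0.05}
gaze_like = {"pick_bolt": 0.2, "pick_plate": 0.1, "pick_bracket": 0.7}
print(update_beliefs(prior, move_like, gaze_like))

Running the example yields a posterior of roughly 0.46 for assemble_A and 0.54 for assemble_B: the gaze observation breaks the tie that the shared low-level movement leaves unresolved, which is exactly the ambiguity the paper addresses.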

Cite this Paper

BibTeX
@InProceedings{pmlr-v210-shen23a,
  title     = {Intention Estimation via Gaze for Robot Guidance in Hierarchical Tasks},
  author    = {Shen, Yifan and Mo, Xiaoyu and Krisciunas, Vytas and Hanson, David and Shi, Bertram E.},
  booktitle = {Proceedings of The 1st Gaze Meets ML workshop},
  pages     = {140--164},
  year      = {2023},
  editor    = {Lourentzou, Ismini and Wu, Joy and Kashyap, Satyananda and Karargyris, Alexandros and Celi, Leo Anthony and Kawas, Ban and Talathi, Sachin},
  volume    = {210},
  series    = {Proceedings of Machine Learning Research},
  month     = {03 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v210/shen23a/shen23a.pdf},
  url       = {https://proceedings.mlr.press/v210/shen23a.html}
}
Endnote
%0 Conference Paper
%T Intention Estimation via Gaze for Robot Guidance in Hierarchical Tasks
%A Yifan Shen
%A Xiaoyu Mo
%A Vytas Krisciunas
%A David Hanson
%A Bertram E. Shi
%B Proceedings of The 1st Gaze Meets ML workshop
%C Proceedings of Machine Learning Research
%D 2023
%E Ismini Lourentzou
%E Joy Wu
%E Satyananda Kashyap
%E Alexandros Karargyris
%E Leo Anthony Celi
%E Ban Kawas
%E Sachin Talathi
%F pmlr-v210-shen23a
%I PMLR
%P 140--164
%U https://proceedings.mlr.press/v210/shen23a.html
%V 210
APA
Shen, Y., Mo, X., Krisciunas, V., Hanson, D. & Shi, B. E. (2023). Intention Estimation via Gaze for Robot Guidance in Hierarchical Tasks. Proceedings of The 1st Gaze Meets ML workshop, in Proceedings of Machine Learning Research 210:140-164. Available from https://proceedings.mlr.press/v210/shen23a.html.
