The EMPATHIC Framework for Task Learning from Implicit Human Feedback

Yuchen Cui, Qiping Zhang, Brad Knox, Alessandro Allievi, Peter Stone, Scott Niekum
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:604-626, 2021.

Abstract

Reactions such as gestures, facial expressions, and vocalizations are an abundant, naturally occurring channel of information that humans provide during interactions. A robot or other agent could leverage an understanding of such implicit human feedback to improve its task performance at no cost to the human. This approach contrasts with common agent teaching methods based on demonstrations, critiques, or other guidance that need to be attentively and intentionally provided. In this paper, we first define the general problem of learning from implicit human feedback and then propose to address this problem through a novel data-driven framework, EMPATHIC. This two-stage method consists of (1) mapping implicit human feedback to relevant task statistics such as reward, optimality, and advantage; and (2) using such a mapping to learn a task. We instantiate the first stage and three second-stage evaluations of the learned mapping. To do so, we collect a dataset of human facial reactions while subjects observe an agent execute a sub-optimal policy for a prescribed training task. We train a deep neural network on this data and demonstrate its ability to (1) infer relative reward ranking of events in the training task from prerecorded human facial reactions; (2) improve the policy of an agent in the training task using live human facial reactions; and (3) transfer to a novel domain in which it evaluates robot manipulation trajectories.
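
The framework's two stages suggest a simple structure: first learn a mapping from human reactions to a task statistic, then apply that learned mapping to evaluate or improve behavior. Below is a minimal, hypothetical sketch of that recipe in PyTorch; it is not the paper's implementation, and all names (ReactionMapper, rank_events) and dimensions are assumptions for illustration only.

```python
# Hypothetical sketch of the EMPATHIC two-stage recipe, not the authors' code.
# Stage 1: learn a mapping from facial-reaction features to a task statistic
# (here, a scalar reward estimate). Stage 2: use the learned mapping to
# evaluate behavior, e.g., by ranking observed events by estimated reward.
import torch
import torch.nn as nn

class ReactionMapper(nn.Module):
    """Maps a vector of facial-reaction features to a scalar reward estimate."""
    def __init__(self, feature_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # predicted task statistic (e.g., reward)
        )

    def forward(self, reaction_features: torch.Tensor) -> torch.Tensor:
        return self.net(reaction_features).squeeze(-1)

def rank_events(mapper: ReactionMapper, reactions: torch.Tensor) -> torch.Tensor:
    """Stage 2: order events by estimated reward, using only the human
    reactions recorded while each event was observed."""
    with torch.no_grad():
        scores = mapper(reactions)  # one score per event
    return torch.argsort(scores, descending=True)

# Example: 32 events, each paired with a 128-dim reaction embedding
# (dimensions are arbitrary placeholders).
mapper = ReactionMapper(feature_dim=128)
reactions = torch.randn(32, 128)
print(rank_events(mapper, reactions)[:5])  # indices of the top-ranked events
```

Per the abstract, such a mapping is trained once on a prescribed training task and then reused: on prerecorded reactions, on live reactions for policy improvement, and on trajectories in a novel robot-manipulation domain.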

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-cui21a,
  title     = {The EMPATHIC Framework for Task Learning from Implicit Human Feedback},
  author    = {Cui, Yuchen and Zhang, Qiping and Knox, Brad and Allievi, Alessandro and Stone, Peter and Niekum, Scott},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {604--626},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/cui21a/cui21a.pdf},
  url       = {https://proceedings.mlr.press/v155/cui21a.html},
  abstract  = {Reactions such as gestures, facial expressions, and vocalizations are an abundant, naturally occurring channel of information that humans provide during interactions. A robot or other agent could leverage an understanding of such implicit human feedback to improve its task performance at no cost to the human. This approach contrasts with common agent teaching methods based on demonstrations, critiques, or other guidance that need to be attentively and intentionally provided. In this paper, we first define the general problem of learning from implicit human feedback and then propose to address this problem through a novel data-driven framework, EMPATHIC. This two-stage method consists of (1) mapping implicit human feedback to relevant task statistics such as reward, optimality, and advantage; and (2) using such a mapping to learn a task. We instantiate the first stage and three second-stage evaluations of the learned mapping. To do so, we collect a dataset of human facial reactions while subjects observe an agent execute a sub-optimal policy for a prescribed training task. We train a deep neural network on this data and demonstrate its ability to (1) infer relative reward ranking of events in the training task from prerecorded human facial reactions; (2) improve the policy of an agent in the training task using live human facial reactions; and (3) transfer to a novel domain in which it evaluates robot manipulation trajectories.}
}
Endnote
%0 Conference Paper
%T The EMPATHIC Framework for Task Learning from Implicit Human Feedback
%A Yuchen Cui
%A Qiping Zhang
%A Brad Knox
%A Alessandro Allievi
%A Peter Stone
%A Scott Niekum
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-cui21a
%I PMLR
%P 604--626
%U https://proceedings.mlr.press/v155/cui21a.html
%V 155
%X Reactions such as gestures, facial expressions, and vocalizations are an abundant, naturally occurring channel of information that humans provide during interactions. A robot or other agent could leverage an understanding of such implicit human feedback to improve its task performance at no cost to the human. This approach contrasts with common agent teaching methods based on demonstrations, critiques, or other guidance that need to be attentively and intentionally provided. In this paper, we first define the general problem of learning from implicit human feedback and then propose to address this problem through a novel data-driven framework, EMPATHIC. This two-stage method consists of (1) mapping implicit human feedback to relevant task statistics such as reward, optimality, and advantage; and (2) using such a mapping to learn a task. We instantiate the first stage and three second-stage evaluations of the learned mapping. To do so, we collect a dataset of human facial reactions while subjects observe an agent execute a sub-optimal policy for a prescribed training task. We train a deep neural network on this data and demonstrate its ability to (1) infer relative reward ranking of events in the training task from prerecorded human facial reactions; (2) improve the policy of an agent in the training task using live human facial reactions; and (3) transfer to a novel domain in which it evaluates robot manipulation trajectories.
APA
Cui, Y., Zhang, Q., Knox, B., Allievi, A., Stone, P. & Niekum, S. (2021). The EMPATHIC Framework for Task Learning from Implicit Human Feedback. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:604-626. Available from https://proceedings.mlr.press/v155/cui21a.html.