Literal or Pedagogic Human? Analyzing Human Model Misspecification in Objective Learning

Smitha Milli, Anca D. Dragan
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:925-934, 2020.

Abstract

It is incredibly easy for a system designer to misspecify the objective for an autonomous system (“robot”), thus motivating the desire to have the robot learn the objective from human behavior instead. Recent work has suggested that people have an interest in the robot performing well, and will thus behave pedagogically, choosing actions that are informative to the robot. In turn, robots benefit from interpreting the behavior by accounting for this pedagogy. In this work, we focus on misspecification: we argue that robots might not know whether people are being pedagogic or literal and that it is important to ask which assumption is safer to make. We cast objective learning into the more general form of a common-payoff game between the robot and human, and prove that in any such game literal interpretation is more robust to misspecification. Experiments with human data support our theoretical results and point to the sensitivity of the pedagogic assumption.
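To make the literal-vs-pedagogic comparison concrete, below is a minimal Python sketch of a one-shot toy version of the setup: a Boltzmann-rational literal human, a pedagogic human who picks actions to raise a literal observer's posterior on the true objective, and a robot that inverts one of the two models. The reward table, the rationality parameter beta, and the problem sizes are illustrative assumptions, not quantities from the paper; the paper's actual analysis covers general common-payoff games.

```python
# A minimal sketch (not the paper's formulation) of literal vs. pedagogic
# interpretation in a one-shot objective-learning problem. All rewards,
# sizes, and the rationality parameter `beta` are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_thetas, n_actions = 4, 6                   # candidate objectives, human actions
beta = 2.0                                   # Boltzmann rationality (assumed)
R = rng.normal(size=(n_thetas, n_actions))   # reward of action a under objective theta
prior = np.full(n_thetas, 1.0 / n_thetas)    # uniform prior over objectives

def literal_human(R, beta):
    """Noisily rational human: P(a | theta) proportional to exp(beta * R[theta, a])."""
    logits = beta * R
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

def posterior(likelihood, prior):
    """Bayesian update over objectives from one observed action.

    likelihood: P(a | theta), shape (n_thetas, n_actions).
    Returns P(theta | a); each column sums to 1."""
    joint = likelihood * prior[:, None]
    return joint / joint.sum(axis=0, keepdims=True)

# Literal robot: inverts the literal-human model.
P_lit = literal_human(R, beta)
post_lit = posterior(P_lit, prior)

# Pedagogic human: chooses actions in proportion to how strongly they make
# a literal observer infer the true objective: P(a | theta) prop. to post_lit[theta, a].
P_ped = post_lit / post_lit.sum(axis=1, keepdims=True)
# Pedagogic robot: inverts the pedagogic-human model.
post_ped = posterior(P_ped, prior)

def accuracy(post, human_policy):
    """P(robot's MAP objective is correct) when the human truly follows human_policy."""
    correct = 0.0
    for theta in range(n_thetas):
        for a in range(n_actions):
            correct += prior[theta] * human_policy[theta, a] * (post[:, a].argmax() == theta)
    return correct

# Each robot paired with each true human type; off-diagonal = misspecification.
print("literal robot,   literal human:  ", accuracy(post_lit, P_lit))
print("literal robot,   pedagogic human:", accuracy(post_lit, P_ped))
print("pedagogic robot, literal human:  ", accuracy(post_ped, P_lit))
print("pedagogic robot, pedagogic human:", accuracy(post_ped, P_ped))
```

Comparing the two mismatched pairings in runs of this sketch gives a feel for which interpretation degrades more gracefully; the paper's theorem states that the literal one does.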

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-milli20a,
  title     = {Literal or Pedagogic Human? Analyzing Human Model Misspecification in Objective Learning},
  author    = {Milli, Smitha and Dragan, Anca D.},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {925--934},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/milli20a/milli20a.pdf},
  url       = {https://proceedings.mlr.press/v115/milli20a.html},
  abstract  = {It is incredibly easy for a system designer to misspecify the objective for an autonomous system (“robot”), thus motivating the desire to have the robot learn the objective from human behavior instead. Recent work has suggested that people have an interest in the robot performing well, and will thus behave pedagogically, choosing actions that are informative to the robot. In turn, robots benefit from interpreting the behavior by accounting for this pedagogy. In this work, we focus on misspecification: we argue that robots might not know whether people are being pedagogic or literal and that it is important to ask which assumption is safer to make. We cast objective learning into the more general form of a common-payoff game between the robot and human, and prove that in any such game literal interpretation is more robust to misspecification. Experiments with human data support our theoretical results and point to the sensitivity of the pedagogic assumption.}
}
Endnote
%0 Conference Paper
%T Literal or Pedagogic Human? Analyzing Human Model Misspecification in Objective Learning
%A Smitha Milli
%A Anca D. Dragan
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-milli20a
%I PMLR
%P 925--934
%U https://proceedings.mlr.press/v115/milli20a.html
%V 115
%X It is incredibly easy for a system designer to misspecify the objective for an autonomous system (“robot”), thus motivating the desire to have the robot learn the objective from human behavior instead. Recent work has suggested that people have an interest in the robot performing well, and will thus behave pedagogically, choosing actions that are informative to the robot. In turn, robots benefit from interpreting the behavior by accounting for this pedagogy. In this work, we focus on misspecification: we argue that robots might not know whether people are being pedagogic or literal and that it is important to ask which assumption is safer to make. We cast objective learning into the more general form of a common-payoff game between the robot and human, and prove that in any such game literal interpretation is more robust to misspecification. Experiments with human data support our theoretical results and point to the sensitivity of the pedagogic assumption.
APA
Milli, S. & Dragan, A. D. (2020). Literal or Pedagogic Human? Analyzing Human Model Misspecification in Objective Learning. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:925-934. Available from https://proceedings.mlr.press/v115/milli20a.html.
