Learning under Misspecified Objective Spaces

Andreea Bobu, Andrea Bajcsy, Jaime F. Fisac, Anca D. Dragan
Proceedings of The 2nd Conference on Robot Learning, PMLR 87:796-805, 2018.

Abstract

Learning robot objective functions from human input has become increasingly important, but state-of-the-art techniques assume that the human’s desired objective lies within the robot’s hypothesis space. When this is not true, even methods that keep track of uncertainty over the objective fail because they reason about which hypothesis might be correct, and not whether any of the hypotheses are correct. We focus specifically on learning from physical human corrections during the robot’s task execution, where not having a rich enough hypothesis space leads to the robot updating its objective in ways that the person did not actually intend. We observe that such corrections appear irrelevant to the robot, because they are not the best way of achieving any of the candidate objectives. Instead of naively trusting and learning from every human interaction, we propose robots learn conservatively by reasoning in real time about how relevant the human’s correction is for the robot’s hypothesis space. We test our inference method in an experiment with human interaction data, and demonstrate that this alleviates unintended learning in an in-person user study with a robot manipulator.
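
To make the abstract's idea concrete, below is a minimal Python sketch of relevance-gated, conservative learning from a physical correction. It assumes a finite hypothesis space of candidate weight vectors over hand-designed trajectory features and a Boltzmann-rational model of the human; the feature values, the rationality coefficient beta, and the threshold-based relevance heuristic are illustrative simplifications, not the paper's exact real-time inference.

```python
import numpy as np

# Illustrative hypothesis space: each row is a candidate weight vector theta
# over two hand-designed trajectory features (feature semantics are hypothetical).
THETAS = np.array([
    [1.0, 0.0],   # e.g., "keep the end effector close to the table"
    [0.0, 1.0],   # e.g., "keep the cup upright"
])

def correction_likelihood(phi_corrected, phi_alternatives, theta, beta=5.0):
    """P(correction | theta) under a Boltzmann-rational human model: the human
    prefers the corrected trajectory in proportion to exp(-beta * cost).
    beta is an illustrative rationality coefficient, not a value from the paper."""
    costs = np.vstack([phi_corrected, phi_alternatives]) @ theta
    p = np.exp(-beta * costs)
    return p[0] / p.sum()

def conservative_update(belief, phi_corrected, phi_alternatives,
                        relevance_threshold=0.5):
    """Update the belief over THETAS only to the extent the correction is
    relevant to *some* hypothesis: if even the best-explaining candidate
    barely prefers the corrected trajectory over the alternatives, skip the
    update rather than learn an unintended lesson."""
    liks = np.array([
        correction_likelihood(phi_corrected, phi_alternatives, theta)
        for theta in THETAS
    ])
    # Relevance heuristic (an assumption, not the paper's estimator): how much
    # better the best hypothesis explains the correction than a uniformly
    # random choice among the corrected + alternative trajectories.
    chance = 1.0 / (1 + len(phi_alternatives))
    relevance = np.clip((liks.max() - chance) / (1.0 - chance), 0.0, 1.0)
    if relevance < relevance_threshold:
        return belief, relevance  # correction looks irrelevant: stay conservative
    posterior = belief * liks
    return posterior / posterior.sum(), relevance

# Usage: a correction that the first hypothesis explains well.
belief = np.full(len(THETAS), 1.0 / len(THETAS))
phi_corrected = np.array([0.1, 0.9])                   # features after the push
phi_alternatives = np.array([[0.9, 0.9], [0.8, 1.0]])  # plausible alternatives
belief, relevance = conservative_update(belief, phi_corrected, phi_alternatives)
print(belief, relevance)  # belief shifts toward THETAS[0]; relevance is high
```

When even the best hypothesis barely prefers the corrected trajectory, the sketch leaves the belief unchanged, mirroring the abstract's observation that corrections outside the hypothesis space appear irrelevant to the robot and should be trusted less.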

Cite this Paper


BibTeX
@InProceedings{pmlr-v87-bobu18a,
  title     = {Learning under Misspecified Objective Spaces},
  author    = {Bobu, Andreea and Bajcsy, Andrea and Fisac, Jaime F. and Dragan, Anca D.},
  booktitle = {Proceedings of The 2nd Conference on Robot Learning},
  pages     = {796--805},
  year      = {2018},
  editor    = {Billard, Aude and Dragan, Anca and Peters, Jan and Morimoto, Jun},
  volume    = {87},
  series    = {Proceedings of Machine Learning Research},
  month     = {29--31 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v87/bobu18a/bobu18a.pdf},
  url       = {https://proceedings.mlr.press/v87/bobu18a.html}
}
Endnote
%0 Conference Paper
%T Learning under Misspecified Objective Spaces
%A Andreea Bobu
%A Andrea Bajcsy
%A Jaime F. Fisac
%A Anca D. Dragan
%B Proceedings of The 2nd Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Aude Billard
%E Anca Dragan
%E Jan Peters
%E Jun Morimoto
%F pmlr-v87-bobu18a
%I PMLR
%P 796--805
%U https://proceedings.mlr.press/v87/bobu18a.html
%V 87
APA
Bobu, A., Bajcsy, A., Fisac, J.F., & Dragan, A.D. (2018). Learning under Misspecified Objective Spaces. Proceedings of The 2nd Conference on Robot Learning, in Proceedings of Machine Learning Research 87:796-805. Available from https://proceedings.mlr.press/v87/bobu18a.html.
