Learning Reward Functions from Scale Feedback

Nils Wilde, Erdem Biyik, Dorsa Sadigh, Stephen L. Smith
Proceedings of the 5th Conference on Robot Learning, PMLR 164:353-362, 2022.

Abstract

Today’s robots are increasingly interacting with people and need to efficiently learn inexperienced users’ preferences. A common framework is to iteratively query the user about which of two presented robot trajectories they prefer. While this minimizes the user’s effort, a strict choice does not yield any information on how much one trajectory is preferred. We propose scale feedback, where the user utilizes a slider to give more nuanced information. We introduce a probabilistic model of how users would provide feedback and derive a learning framework for the robot. We demonstrate the performance benefit of slider feedback in simulations, and validate our approach in two user studies, suggesting that scale feedback enables more effective learning in practice.
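
The abstract only sketches the learning framework; the paper itself details the probabilistic feedback model. As a rough illustration (not the authors' implementation), the following minimal Python sketch shows one way Bayesian reward learning from scale feedback could look, assuming a linear reward w·φ(ξ) over trajectory features and a Gaussian noise model on the slider value. All function names, the tanh squashing, and the noise parameter are hypothetical choices for this sketch.

    import numpy as np

    def slider_likelihood(w, phi_a, phi_b, s, noise=0.1):
        # Model the slider value s in [-1, 1] as a noisy readout of the
        # (squashed) reward difference between trajectories a and b.
        diff = np.tanh(w @ (phi_a - phi_b))          # ideal slider position under w
        return np.exp(-0.5 * ((s - diff) / noise) ** 2)

    def update_posterior(samples, weights, phi_a, phi_b, s):
        # Bayesian update: reweight hypothesis reward vectors by how well
        # they explain the observed slider feedback s.
        weights = weights * np.array(
            [slider_likelihood(w, phi_a, phi_b, s) for w in samples])
        return weights / weights.sum()

    rng = np.random.default_rng(0)
    d = 3                                            # number of trajectory features
    samples = rng.normal(size=(500, d))              # hypotheses over reward weights w
    samples /= np.linalg.norm(samples, axis=1, keepdims=True)
    weights = np.full(len(samples), 1.0 / len(samples))

    # One query: two trajectory feature vectors and a slider response s in [-1, 1],
    # where s > 0 means the user leans toward trajectory a.
    phi_a, phi_b = rng.normal(size=d), rng.normal(size=d)
    s = 0.6                                          # simulated user feedback
    weights = update_posterior(samples, weights, phi_a, phi_b, s)
    w_hat = weights @ samples                        # posterior mean reward estimate
    print("estimated reward weights:", w_hat)

In contrast to strict pairwise choices, which only reveal the sign of the reward difference, the slider value here also constrains its magnitude, which is what makes each query more informative.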

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-wilde22a,
  title     = {Learning Reward Functions from Scale Feedback},
  author    = {Wilde, Nils and Biyik, Erdem and Sadigh, Dorsa and Smith, Stephen L.},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {353--362},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/wilde22a/wilde22a.pdf},
  url       = {https://proceedings.mlr.press/v164/wilde22a.html},
  abstract  = {Today’s robots are increasingly interacting with people and need to efficiently learn inexperienced users’ preferences. A common framework is to iteratively query the user about which of two presented robot trajectories they prefer. While this minimizes the user’s effort, a strict choice does not yield any information on how much one trajectory is preferred. We propose scale feedback, where the user utilizes a slider to give more nuanced information. We introduce a probabilistic model of how users would provide feedback and derive a learning framework for the robot. We demonstrate the performance benefit of slider feedback in simulations, and validate our approach in two user studies, suggesting that scale feedback enables more effective learning in practice.}
}
Endnote
%0 Conference Paper
%T Learning Reward Functions from Scale Feedback
%A Nils Wilde
%A Erdem Biyik
%A Dorsa Sadigh
%A Stephen L. Smith
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-wilde22a
%I PMLR
%P 353--362
%U https://proceedings.mlr.press/v164/wilde22a.html
%V 164
%X Today’s robots are increasingly interacting with people and need to efficiently learn inexperienced users’ preferences. A common framework is to iteratively query the user about which of two presented robot trajectories they prefer. While this minimizes the user’s effort, a strict choice does not yield any information on how much one trajectory is preferred. We propose scale feedback, where the user utilizes a slider to give more nuanced information. We introduce a probabilistic model of how users would provide feedback and derive a learning framework for the robot. We demonstrate the performance benefit of slider feedback in simulations, and validate our approach in two user studies, suggesting that scale feedback enables more effective learning in practice.
APA
Wilde, N., Biyik, E., Sadigh, D. & Smith, S.L. (2022). Learning Reward Functions from Scale Feedback. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:353-362. Available from https://proceedings.mlr.press/v164/wilde22a.html.