Learning Multimodal Rewards from Rankings

Vivek Myers, Erdem Biyik, Nima Anari, Dorsa Sadigh
Proceedings of the 5th Conference on Robot Learning, PMLR 164:342-352, 2022.

Abstract

Learning from human feedback has been shown to be a useful approach for acquiring robot reward functions. However, expert feedback is often assumed to be drawn from an underlying unimodal reward function. This assumption does not always hold, e.g., in settings where multiple experts provide data or where a single expert provides data for different tasks. We thus go beyond learning a unimodal reward and focus on learning a multimodal reward function. We formulate multimodal reward learning as a mixture learning problem and develop a novel ranking-based learning approach, where the experts are only required to rank a given set of trajectories. Furthermore, as access to interaction data is often expensive in robotics, we develop an active querying approach to accelerate the learning process. We conduct experiments and user studies using a multi-task variant of OpenAI's LunarLander and a real Fetch robot, where we collect data from multiple users with different preferences. The results suggest that our approach efficiently learns multimodal reward functions and improves data-efficiency over the benchmark methods we adapt to our learning problem.
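As a concrete illustration of the ranking-based mixture formulation described above, the sketch below computes the likelihood of expert rankings under a mixture of linear reward modes, using the standard Plackett-Luce ranking model. This is a minimal sketch under assumed conventions: the linear feature parameterization, the mode count, and all function names (logsumexp, plackett_luce_loglik, mixture_loglik) are illustrative, not the authors' actual implementation.

import numpy as np

def logsumexp(x):
    # Numerically stable log(sum(exp(x))).
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def plackett_luce_loglik(w, ranked_features):
    """Log-likelihood of one ranking under a single linear reward mode.

    ranked_features: (K, d) array; row i holds the features of the
    trajectory the expert ranked in position i (best first).
    """
    scores = ranked_features @ w
    # Plackett-Luce: P(ranking) = prod_i exp(s_i) / sum_{j >= i} exp(s_j)
    return sum(scores[i] - logsumexp(scores[i:]) for i in range(len(scores)))

def mixture_loglik(log_alphas, modes, rankings):
    """Log-likelihood of a dataset of rankings under an M-mode mixture.

    log_alphas: (M,) log mixing weights; modes: (M, d) reward weights;
    rankings: list of (K, d) ranked-feature arrays.
    """
    total = 0.0
    for ranked in rankings:
        per_mode = np.array([plackett_luce_loglik(w, ranked) for w in modes])
        total += logsumexp(log_alphas + per_mode)  # marginalize over modes
    return total

# Hypothetical usage: 10 rankings of 4 trajectories with 3 features each,
# scored under a 2-mode mixture with equal mixing weights.
rng = np.random.default_rng(0)
rankings = [rng.normal(size=(4, 3)) for _ in range(10)]
modes = rng.normal(size=(2, 3))
log_alphas = np.log(np.array([0.5, 0.5]))
print(mixture_loglik(log_alphas, modes, rankings))

Fitting log_alphas and modes to observed rankings could then proceed by gradient ascent or EM on this likelihood; the paper's active querying component would additionally select which trajectory sets to ask the experts to rank.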

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-myers22a,
  title     = {Learning Multimodal Rewards from Rankings},
  author    = {Myers, Vivek and Biyik, Erdem and Anari, Nima and Sadigh, Dorsa},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {342--352},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/myers22a/myers22a.pdf},
  url       = {https://proceedings.mlr.press/v164/myers22a.html}
}
Endnote
%0 Conference Paper
%T Learning Multimodal Rewards from Rankings
%A Vivek Myers
%A Erdem Biyik
%A Nima Anari
%A Dorsa Sadigh
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-myers22a
%I PMLR
%P 342--352
%U https://proceedings.mlr.press/v164/myers22a.html
%V 164
APA
Myers, V., Biyik, E., Anari, N. & Sadigh, D. (2022). Learning Multimodal Rewards from Rankings. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:342-352. Available from https://proceedings.mlr.press/v164/myers22a.html.
