Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input

Andi Peng, Yuying Sun, Tianmin Shu, David Abel
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:40258-40270, 2024.

Abstract

Humans use context to specify preferences over behaviors, i.e., their reward functions. Yet algorithms for inferring reward models from preference data do not take this social learning view into account. Inspired by pragmatic human communication, we study how to extract fine-grained data about why an example is preferred that is useful for learning an accurate reward model. We propose to enrich preference queries to ask (1) which features of a given example are preferable, in addition to (2) comparisons between objects. We derive an approach for learning from these feature-level preferences, both when users specify which features are reward-relevant and when they do not. We evaluate our approach in linear bandit settings in both visual and language-based domains. Results support the efficiency of our approach in quickly converging to accurate rewards with fewer comparisons than example-only labels. Finally, we validate its real-world applicability with a behavioral experiment on a mushroom foraging task. Our findings suggest that incorporating pragmatic feature preferences is a promising approach for more efficient, user-aligned reward learning.
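
To make the idea concrete, here is a minimal sketch (not the authors' implementation) of how feature-level preference labels could augment example-level comparisons when fitting a linear reward model. It assumes a Bradley-Terry-style likelihood over linear rewards and treats each user-flagged feature as an additional per-coordinate comparison; the function names and the flagged-feature term are illustrative assumptions, not taken from the paper.

# Minimal sketch (assumed formulation, not the paper's exact algorithm):
# learn a linear reward w from pairwise comparisons plus optional
# feature-level labels indicating which features made an example preferable.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_reward(comparisons, dim, lr=0.1, steps=2000):
    """comparisons: list of (phi_pref, phi_other, relevant_idx), where
    phi_pref / phi_other are feature vectors of the preferred / rejected
    example and relevant_idx lists the features the user flagged as the
    reason (may be empty when no feature-level label is given)."""
    w = np.zeros(dim)
    for _ in range(steps):
        grad = np.zeros(dim)
        for phi_p, phi_o, rel in comparisons:
            diff = phi_p - phi_o
            # Example-level Bradley-Terry term: P(pref > other) = sigmoid(w . diff)
            grad += (1.0 - sigmoid(w @ diff)) * diff
            # Feature-level term (assumed form): each flagged feature is
            # treated as its own per-coordinate comparison.
            for j in rel:
                grad[j] += (1.0 - sigmoid(w[j] * diff[j])) * diff[j]
        w += lr * grad / max(len(comparisons), 1)
    return w

# Toy usage: feature 0 drives the true reward, and the user flags it explicitly.
data = [(np.array([1.0, 0.2]), np.array([0.1, 0.9]), [0])]
print(fit_reward(data, dim=2))

In this toy setup the flagged-feature term concentrates the update on the coordinate the user identified, which illustrates the kind of sample-efficiency gain over example-only labels that the abstract describes.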

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-peng24d,
  title     = {Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input},
  author    = {Peng, Andi and Sun, Yuying and Shu, Tianmin and Abel, David},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {40258--40270},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/peng24d/peng24d.pdf},
  url       = {https://proceedings.mlr.press/v235/peng24d.html},
  abstract  = {Humans use context to specify preferences over behaviors, i.e. their reward functions. Yet, algorithms for inferring reward models from preference data do not take this social learning view into account. Inspired by pragmatic human communication, we study how to extract fine-grained data regarding why an example is preferred that is useful for learning an accurate reward model. We propose to enrich preference queries to ask both (1) which features of a given example are preferable in addition to (2) comparisons between objects. We derive an approach for learning from these feature-level preferences, both for cases where users specify which features are reward-relevant, and when users do not. We evaluate our approach on linear bandit settings in both visual and language-based domains. Results support the efficiency of our approach in quickly converging to accurate rewards with less comparisons vs. example-only labels. Finally, we validate the real-world applicability with a behavioral experiment on a mushroom foraging task. Our findings suggest that incorporating pragmatic feature preferences is a promising approach for more efficient user-aligned reward learning.}
}
Endnote
%0 Conference Paper
%T Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input
%A Andi Peng
%A Yuying Sun
%A Tianmin Shu
%A David Abel
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-peng24d
%I PMLR
%P 40258--40270
%U https://proceedings.mlr.press/v235/peng24d.html
%V 235
%X Humans use context to specify preferences over behaviors, i.e. their reward functions. Yet, algorithms for inferring reward models from preference data do not take this social learning view into account. Inspired by pragmatic human communication, we study how to extract fine-grained data regarding why an example is preferred that is useful for learning an accurate reward model. We propose to enrich preference queries to ask both (1) which features of a given example are preferable in addition to (2) comparisons between objects. We derive an approach for learning from these feature-level preferences, both for cases where users specify which features are reward-relevant, and when users do not. We evaluate our approach on linear bandit settings in both visual and language-based domains. Results support the efficiency of our approach in quickly converging to accurate rewards with less comparisons vs. example-only labels. Finally, we validate the real-world applicability with a behavioral experiment on a mushroom foraging task. Our findings suggest that incorporating pragmatic feature preferences is a promising approach for more efficient user-aligned reward learning.
APA
Peng, A., Sun, Y., Shu, T., & Abel, D. (2024). Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Input. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:40258-40270. Available from https://proceedings.mlr.press/v235/peng24d.html.