Fiduciary Bandits

Gal Bahar, Omer Ben-Porat, Kevin Leyton-Brown, Moshe Tennenholtz
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:518-527, 2020.

Abstract

Recommendation systems often face exploration-exploitation tradeoffs: the system can only learn about the desirability of new options by recommending them to some user. Such systems can thus be modeled as multi-armed bandit settings; however, users are self-interested and cannot be made to follow recommendations. We ask whether exploration can nevertheless be performed in a way that scrupulously respects agents’ interests—i.e., by a system that acts as a fiduciary. More formally, we introduce a model in which a recommendation system faces an exploration-exploitation tradeoff under the constraint that it can never recommend any action that it knows yields lower reward in expectation than an agent would achieve if it acted alone. Our main contribution is a positive result: an asymptotically optimal, incentive compatible, and ex-ante individually rational recommendation algorithm.
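The core constraint in the abstract — never recommend an arm that is *known* to yield a lower expected reward than the agent's default choice — can be illustrated with a small sketch. This is a hypothetical, simplified round of such a recommendation rule (names like `fiduciary_recommend` are my own), not the paper's asymptotically optimal, ex-ante individually rational algorithm:

```python
import random

def fiduciary_recommend(prior_means, observed_means, default_arm):
    """One round of an individually-rational recommendation rule:
    never recommend an arm already explored and *known* to have a
    lower expected reward than the agent's default choice.

    prior_means    -- expected reward of each arm under the prior
    observed_means -- dict: arm -> empirical mean (explored arms only)
    default_arm    -- the arm an agent would pick acting alone

    Illustrative sketch only; the paper's algorithm handles the
    exploration-exploitation tradeoff and incentives more subtly.
    """
    baseline = observed_means.get(default_arm, prior_means[default_arm])
    allowed = [
        a for a in range(len(prior_means))
        if a not in observed_means              # not yet known to be worse
        or observed_means[a] >= baseline        # known, and not worse
    ]
    # Explore an allowed, still-unknown arm if one exists; else exploit.
    unexplored = [a for a in allowed if a not in observed_means]
    if unexplored:
        return random.choice(unexplored)
    return max(allowed, key=lambda a: observed_means[a])
```

For example, with default arm 0 and an explored arm 1 that underperformed the baseline, arm 1 can never be recommended, while the unexplored arm 2 still can be.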

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-bahar20a,
  title     = {Fiduciary Bandits},
  author    = {Bahar, Gal and Ben-Porat, Omer and Leyton-Brown, Kevin and Tennenholtz, Moshe},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {518--527},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/bahar20a/bahar20a.pdf},
  url       = {http://proceedings.mlr.press/v119/bahar20a.html},
  abstract  = {Recommendation systems often face exploration-exploitation tradeoffs: the system can only learn about the desirability of new options by recommending them to some user. Such systems can thus be modeled as multi-armed bandit settings; however, users are self-interested and cannot be made to follow recommendations. We ask whether exploration can nevertheless be performed in a way that scrupulously respects agents’ interests—i.e., by a system that acts as a fiduciary. More formally, we introduce a model in which a recommendation system faces an exploration-exploitation tradeoff under the constraint that it can never recommend any action that it knows yields lower reward in expectation than an agent would achieve if it acted alone. Our main contribution is a positive result: an asymptotically optimal, incentive compatible, and ex-ante individually rational recommendation algorithm.}
}
Endnote
%0 Conference Paper
%T Fiduciary Bandits
%A Gal Bahar
%A Omer Ben-Porat
%A Kevin Leyton-Brown
%A Moshe Tennenholtz
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-bahar20a
%I PMLR
%P 518--527
%U http://proceedings.mlr.press/v119/bahar20a.html
%V 119
%X Recommendation systems often face exploration-exploitation tradeoffs: the system can only learn about the desirability of new options by recommending them to some user. Such systems can thus be modeled as multi-armed bandit settings; however, users are self-interested and cannot be made to follow recommendations. We ask whether exploration can nevertheless be performed in a way that scrupulously respects agents’ interests—i.e., by a system that acts as a fiduciary. More formally, we introduce a model in which a recommendation system faces an exploration-exploitation tradeoff under the constraint that it can never recommend any action that it knows yields lower reward in expectation than an agent would achieve if it acted alone. Our main contribution is a positive result: an asymptotically optimal, incentive compatible, and ex-ante individually rational recommendation algorithm.
APA
Bahar, G., Ben-Porat, O., Leyton-Brown, K. & Tennenholtz, M. (2020). Fiduciary Bandits. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:518-527. Available from http://proceedings.mlr.press/v119/bahar20a.html.