Incentivized Learning in Principal-Agent Bandit Games

Antoine Scheid, Daniil Tiapkin, Etienne Boursier, Aymeric Capitaine, Eric Moulines, Michael Jordan, El-Mahdi El-Mhamdi, Alain Oliviero Durmus
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:43608-43631, 2024.

Abstract

This work considers a repeated principal-agent bandit game, where the principal can only interact with her environment through the agent. The principal and the agent have misaligned objectives and the choice of action is only left to the agent. However, the principal can influence the agent’s decisions by offering incentives that are added to his rewards. The principal aims to iteratively learn an incentive policy to maximize her own total utility. This framework extends usual bandit problems and is motivated by several practical applications, such as healthcare or ecological taxation, where traditionally used mechanism design theories often overlook the learning aspect of the problem. We present nearly optimal (with respect to a horizon $T$) learning algorithms for the principal’s regret in both multi-armed and linear contextual settings. Finally, we support our theoretical guarantees through numerical experiments.
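The interaction protocol described above can be illustrated with a minimal simulation. The sketch below is not the paper's algorithm; it only shows one round of the game under hypothetical mean rewards (`theta` for the principal, `mu` for the agent) and a greedy best-responding agent: the principal posts an incentive vector, the agent picks the arm maximizing his own reward plus the incentive, and the principal's utility is her reward minus the payment.

```python
import numpy as np

K = 3                                # number of arms
theta = np.array([0.1, 0.5, 0.9])    # principal's mean rewards (hypothetical)
mu = np.array([0.8, 0.4, 0.1])       # agent's mean rewards (hypothetical)

def agent_best_response(incentive):
    """Greedy agent: picks the arm maximizing his own reward plus the incentive."""
    return int(np.argmax(mu + incentive))

def principal_round(incentive):
    """One round: the agent best-responds; the principal's utility is her
    mean reward on the chosen arm minus the incentive she pays out."""
    a = agent_best_response(incentive)
    return a, theta[a] - incentive[a]

# With no incentive the agent picks arm 0 (best for him, worst for the principal).
a0, u0 = principal_round(np.zeros(K))

# Paying slightly more than the agent's utility gap steers him to arm 2,
# which improves the principal's net utility despite the payment.
bonus = np.zeros(K)
bonus[2] = (mu[0] - mu[2]) + 0.01
a1, u1 = principal_round(bonus)
```

In the full learning problem, the principal does not know `mu` or `theta` and must estimate the smallest sufficient incentives from bandit feedback over $T$ rounds, which is what drives the regret analysis.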

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-scheid24a,
  title     = {Incentivized Learning in Principal-Agent Bandit Games},
  author    = {Scheid, Antoine and Tiapkin, Daniil and Boursier, Etienne and Capitaine, Aymeric and Moulines, Eric and Jordan, Michael and El-Mhamdi, El-Mahdi and Oliviero Durmus, Alain},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {43608--43631},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/scheid24a/scheid24a.pdf},
  url       = {https://proceedings.mlr.press/v235/scheid24a.html},
  abstract  = {This work considers a repeated principal-agent bandit game, where the principal can only interact with her environment through the agent. The principal and the agent have misaligned objectives and the choice of action is only left to the agent. However, the principal can influence the agent’s decisions by offering incentives which add up to his rewards. The principal aims to iteratively learn an incentive policy to maximize her own total utility. This framework extends usual bandit problems and is motivated by several practical applications, such as healthcare or ecological taxation, where traditionally used mechanism design theories often overlook the learning aspect of the problem. We present nearly optimal (with respect to a horizon $T$) learning algorithms for the principal’s regret in both multi-armed and linear contextual settings. Finally, we support our theoretical guarantees through numerical experiments.}
}
Endnote
%0 Conference Paper
%T Incentivized Learning in Principal-Agent Bandit Games
%A Antoine Scheid
%A Daniil Tiapkin
%A Etienne Boursier
%A Aymeric Capitaine
%A Eric Moulines
%A Michael Jordan
%A El-Mahdi El-Mhamdi
%A Alain Oliviero Durmus
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-scheid24a
%I PMLR
%P 43608--43631
%U https://proceedings.mlr.press/v235/scheid24a.html
%V 235
%X This work considers a repeated principal-agent bandit game, where the principal can only interact with her environment through the agent. The principal and the agent have misaligned objectives and the choice of action is only left to the agent. However, the principal can influence the agent’s decisions by offering incentives which add up to his rewards. The principal aims to iteratively learn an incentive policy to maximize her own total utility. This framework extends usual bandit problems and is motivated by several practical applications, such as healthcare or ecological taxation, where traditionally used mechanism design theories often overlook the learning aspect of the problem. We present nearly optimal (with respect to a horizon $T$) learning algorithms for the principal’s regret in both multi-armed and linear contextual settings. Finally, we support our theoretical guarantees through numerical experiments.
APA
Scheid, A., Tiapkin, D., Boursier, E., Capitaine, A., Moulines, E., Jordan, M., El-Mhamdi, E. & Oliviero Durmus, A. (2024). Incentivized Learning in Principal-Agent Bandit Games. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:43608-43631. Available from https://proceedings.mlr.press/v235/scheid24a.html.
