Multi-armed Bandit Problems with Strategic Arms

Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:383-416, 2019.

Abstract

We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round. When pulled, the arm receives some private reward $v_a$ and can choose an amount $x_a$ to pass on to the principal (keeping $v_a-x_a$ for itself). All non-pulled arms get reward $0$. Each strategic arm tries to maximize its own utility over the course of $T$ rounds. Our goal is to design an algorithm for the principal incentivizing these arms to pass on as much of their private rewards as possible. When private rewards are stochastically drawn each round ($v_a^t \leftarrow D_a$), we show that:

- Algorithms that perform well in the classic adversarial multi-armed bandit setting necessarily perform poorly: for all algorithms that guarantee low regret in an adversarial setting, there exist distributions $D_1,\ldots,D_k$ and an $o(T)$-approximate Nash equilibrium for the arms where the principal receives reward $o(T)$.
- There exists an algorithm for the principal that induces a game among the arms where each arm has a dominant strategy. Moreover, for every $o(T)$-approximate Nash equilibrium, the principal receives expected reward $\mu'T - o(T)$, where $\mu'$ is the second-largest of the means $\mathbb{E}[D_{a}]$. This algorithm maintains its guarantee if the arms are non-strategic ($x_a = v_a$), and also if there is a mix of strategic and non-strategic arms.
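For concreteness, below is a minimal Python sketch of the interaction model described in the abstract: each round the principal pulls one arm, the pulled arm draws a private reward $v_a^t \leftarrow D_a$ (Bernoulli draws in this sketch) and chooses how much of it to pass on, and all other arms receive $0$. The greedy_principal policy and the truthful/withhold arm strategies are illustrative placeholders only, not the paper's mechanism or the equilibrium strategies it analyzes.

import random

def simulate(T, arm_means, principal_policy, arm_strategies, seed=0):
    """Simulate T rounds of the strategic bandit interaction.

    arm_means[a]      -- mean of D_a (here modeled as Bernoulli(mu_a) draws)
    principal_policy  -- maps (history, k) to the index of the arm to pull
    arm_strategies[a] -- maps (v, t) to the amount x in [0, v] passed on
    """
    rng = random.Random(seed)
    k = len(arm_means)
    history = []                     # (round, arm pulled, amount passed on)
    principal_total = 0.0
    arm_utilities = [0.0] * k        # a pulled arm keeps v - x for itself

    for t in range(T):
        a = principal_policy(history, k)
        v = 1.0 if rng.random() < arm_means[a] else 0.0   # v_a^t drawn from D_a
        x = arm_strategies[a](v, t)                       # arm's strategic report
        x = max(0.0, min(x, v))                           # enforce 0 <= x_a <= v_a
        principal_total += x
        arm_utilities[a] += v - x
        history.append((t, a, x))
    return principal_total, arm_utilities

# Placeholder strategies (assumptions for illustration, not from the paper):
truthful = lambda v, t: v            # non-strategic arm: x_a = v_a
withhold = lambda v, t: 0.5 * v      # strategic arm keeping half its reward

def greedy_principal(history, k):
    """Toy policy: pull the arm with the highest average passed-on reward so far."""
    totals, counts = [0.0] * k, [0] * k
    for _, a, x in history:
        totals[a] += x
        counts[a] += 1
    if any(c == 0 for c in counts):              # explore each arm once first
        return counts.index(0)
    return max(range(k), key=lambda a: totals[a] / counts[a])

reward, utils = simulate(10_000, [0.3, 0.6, 0.5],
                         greedy_principal, [truthful, withhold, truthful])

Running the sketch with one withholding arm illustrates the tension the paper studies: a naive reward-maximizing principal can be steered toward whichever arm chooses to pass on more, regardless of the underlying means.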

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-braverman19b,
  title     = {Multi-armed Bandit Problems with Strategic Arms},
  author    = {Braverman, Mark and Mao, Jieming and Schneider, Jon and Weinberg, S. Matthew},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {383--416},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/braverman19b/braverman19b.pdf},
  url       = {https://proceedings.mlr.press/v99/braverman19b.html}
}
Endnote
%0 Conference Paper
%T Multi-armed Bandit Problems with Strategic Arms
%A Mark Braverman
%A Jieming Mao
%A Jon Schneider
%A S. Matthew Weinberg
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-braverman19b
%I PMLR
%P 383--416
%U https://proceedings.mlr.press/v99/braverman19b.html
%V 99
APA
Braverman, M., Mao, J., Schneider, J., & Weinberg, S. M. (2019). Multi-armed Bandit Problems with Strategic Arms. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:383-416. Available from https://proceedings.mlr.press/v99/braverman19b.html.
