Multi-armed Bandit Problems with Strategic Arms
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:383-416, 2019.
Abstract
We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round. When pulled, the arm receives some private reward $v_a$ and can choose an amount $x_a$ to pass on to the principal (keeping $v_a - x_a$ for itself). All non-pulled arms get reward $0$. Each strategic arm tries to maximize its own utility over the course of $T$ rounds. Our goal is to design an algorithm for the principal incentivizing these arms to pass on as much of their private rewards as possible. When private rewards are stochastically drawn each round ($v_a^t \leftarrow D_a$), we show that:
\begin{itemize}
\item Algorithms that perform well in the classic adversarial multi-armed bandit setting necessarily perform poorly: For all algorithms that guarantee low regret in an adversarial setting, there exist distributions $D_1, \ldots, D_k$ and an $o(T)$-approximate Nash equilibrium for the arms where the principal receives reward $o(T)$.
\item There exists an algorithm for the principal that induces a game among the arms where each arm has a dominant strategy. Moreover, for every $o(T)$-approximate Nash equilibrium, the principal receives expected reward $\mu' T - o(T)$, where $\mu'$ is the second-largest of the means $\mathbb{E}[D_{a}]$. This algorithm maintains its guarantee if the arms are non-strategic ($x_a = v_a$), and also if there is a mix of strategic and non-strategic arms.
\end{itemize}
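The interaction model described above can be sketched in a few lines of Python. This is only an illustration of the game dynamics (not the paper's mechanism): the principal's explore-then-exploit rule, the Bernoulli reward distributions, and the fixed pass-on fractions are all assumptions made for the example.

```python
import random

def simulate(means, pass_fractions, T, seed=0):
    """Simulate T rounds of the strategic bandit game from the abstract.

    Each round the principal pulls one arm a; the pulled arm draws a
    private reward v_a^t from D_a (here assumed Bernoulli(means[a])) and
    passes x_a = pass_fractions[a] * v_a^t to the principal, keeping the
    rest. Non-pulled arms receive 0. Returns the principal's total reward.
    """
    rng = random.Random(seed)
    k = len(means)
    passed = [0.0] * k   # total reward passed to the principal, per arm
    pulls = [0] * k
    principal_reward = 0.0
    for t in range(T):
        # Illustrative principal strategy (an assumption, not the paper's
        # algorithm): pull each arm once, then the best-observed arm.
        if t < k:
            a = t
        else:
            a = max(range(k), key=lambda i: passed[i] / pulls[i])
        v = 1.0 if rng.random() < means[a] else 0.0  # v_a^t drawn from D_a
        x = pass_fractions[a] * v                    # arm keeps v - x
        passed[a] += x
        pulls[a] += 1
        principal_reward += x
    return principal_reward

# With this naive principal, a generous arm (passes everything) attracts
# pulls even when a stingier arm has a higher private mean.
r = simulate(means=[0.5, 0.9], pass_fractions=[1.0, 0.1], T=1000)
```

Note how the arms' strategic choice of $x_a$, rather than the private means alone, determines which arm a reward-maximizing principal ends up pulling; this tension is what the paper's mechanism is designed to exploit.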