Multi-armed Bandit Algorithm against Strategic Replication

Suho Shin, Seungjoon Lee, Jungseul Ok
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:403-431, 2022.

Abstract

We consider a multi-armed bandit problem in which each agent registers a set of arms and receives a reward whenever one of its arms is selected. An agent might strategically submit additional replicated arms, which can bring it more reward by abusing the bandit algorithm's exploration-exploitation balance. Our analysis reveals that a standard algorithm indeed fails to prevent replication and suffers linear regret in time $T$. We aim to design a bandit algorithm that demotivates replication while achieving a small cumulative regret. We devise a replication-proof algorithm, Hierarchical UCB (H-UCB), which has $O(\ln T)$ regret under any equilibrium. We further propose Robust Hierarchical UCB (RH-UCB), which has sublinear regret even in a realistic scenario with irrational agents replicating carelessly. We verify our theoretical findings through numerical experiments.
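The name "Hierarchical UCB" suggests a two-level index policy: an outer UCB over agents and, nested inside it, a per-agent UCB over that agent's own arms, so that replicating an arm only splits the agent's exploration budget rather than inflating it. The sketch below illustrates this two-level structure under that assumption; the class and function names (`UCB1`, `h_ucb`) are hypothetical and this is not the authors' implementation, merely a minimal illustration of the hierarchical idea.

```python
import math
import random


class UCB1:
    """Standard UCB1 index policy over a fixed set of options."""

    def __init__(self, n_options):
        self.counts = [0] * n_options
        self.sums = [0.0] * n_options
        self.t = 0

    def select(self):
        self.t += 1
        for i, c in enumerate(self.counts):
            if c == 0:  # play every option once before using indices
                return i
        ucb = [
            self.sums[i] / self.counts[i]
            + math.sqrt(2 * math.log(self.t) / self.counts[i])
            for i in range(len(self.counts))
        ]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, i, reward):
        self.counts[i] += 1
        self.sums[i] += reward


def h_ucb(agent_arms, pull, horizon):
    """Two-level UCB sketch (illustrative, not the paper's algorithm).

    agent_arms: list with the number of arms each agent registered.
    pull(agent, arm) -> reward in [0, 1].
    Replicated arms inside one agent share that agent's exploration
    budget, so replication does not earn the agent extra selections.
    """
    outer = UCB1(len(agent_arms))              # UCB over agents
    inner = [UCB1(k) for k in agent_arms]      # UCB over each agent's arms
    agent_pulls = [0] * len(agent_arms)
    for _ in range(horizon):
        a = outer.select()        # which agent gets this round
        arm = inner[a].select()   # which of that agent's arms to play
        r = pull(a, arm)
        inner[a].update(arm, r)
        outer.update(a, r)        # the agent is credited with the reward
        agent_pulls[a] += 1
    return agent_pulls
```

For instance, if agent 0 registers one arm with mean reward 0.9 and agent 1 replicates a mean-0.1 arm three times, the outer UCB still concentrates its pulls on agent 0, since agent 1's replications all feed the same agent-level statistic.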

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-shin22a,
  title     = {Multi-armed Bandit Algorithm against Strategic Replication},
  author    = {Shin, Suho and Lee, Seungjoon and Ok, Jungseul},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {403--431},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/shin22a/shin22a.pdf},
  url       = {https://proceedings.mlr.press/v151/shin22a.html},
  abstract  = {We consider a multi-armed bandit problem in which each agent registers a set of arms and receives a reward whenever one of its arms is selected. An agent might strategically submit additional replicated arms, which can bring it more reward by abusing the bandit algorithm's exploration-exploitation balance. Our analysis reveals that a standard algorithm indeed fails to prevent replication and suffers linear regret in time $T$. We aim to design a bandit algorithm that demotivates replication while achieving a small cumulative regret. We devise a replication-proof algorithm, Hierarchical UCB (H-UCB), which has $O(\ln T)$ regret under any equilibrium. We further propose Robust Hierarchical UCB (RH-UCB), which has sublinear regret even in a realistic scenario with irrational agents replicating carelessly. We verify our theoretical findings through numerical experiments.}
}
Endnote
%0 Conference Paper %T Multi-armed Bandit Algorithm against Strategic Replication %A Suho Shin %A Seungjoon Lee %A Jungseul Ok %B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2022 %E Gustau Camps-Valls %E Francisco J. R. Ruiz %E Isabel Valera %F pmlr-v151-shin22a %I PMLR %P 403--431 %U https://proceedings.mlr.press/v151/shin22a.html %V 151 %X We consider a multi-armed bandit problem in which each agent registers a set of arms and receives a reward whenever one of its arms is selected. An agent might strategically submit additional replicated arms, which can bring it more reward by abusing the bandit algorithm's exploration-exploitation balance. Our analysis reveals that a standard algorithm indeed fails to prevent replication and suffers linear regret in time $T$. We aim to design a bandit algorithm that demotivates replication while achieving a small cumulative regret. We devise a replication-proof algorithm, Hierarchical UCB (H-UCB), which has $O(\ln T)$ regret under any equilibrium. We further propose Robust Hierarchical UCB (RH-UCB), which has sublinear regret even in a realistic scenario with irrational agents replicating carelessly. We verify our theoretical findings through numerical experiments.
APA
Shin, S., Lee, S. & Ok, J. (2022). Multi-armed Bandit Algorithm against Strategic Replication. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:403-431. Available from https://proceedings.mlr.press/v151/shin22a.html.