What You See May Not Be What You Get: UCB Bandit Algorithms Robust to $\varepsilon$-Contamination

Laura Niss, Ambuj Tewari
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 124:450-459, 2020.

Abstract

Motivated by applications of bandit algorithms in education, we consider a stochastic multi-armed bandit problem with $\varepsilon$-contaminated rewards. We allow an adversary to arbitrarily give unbounded contaminated rewards with full knowledge of the past and future. We impose only the constraint that at any time $t$ the proportion of contaminated rewards for any action is less than or equal to $\varepsilon$. We derive concentration inequalities for two robust mean estimators for sub-Gaussian distributions in the $\varepsilon$-contamination context. We define the $\varepsilon$-contaminated stochastic bandit problem and use our robust mean estimators to give two variants of a robust Upper Confidence Bound (UCB) algorithm, crUCB. Using regret derived from only the underlying stochastic rewards, both variants of crUCB achieve $O(\sqrt{KT\log T})$ regret. Our simulations are designed to reflect reasonable settings a teacher would experience when implementing a bandit algorithm. We show that in certain adversarial regimes crUCB not only outperforms algorithms designed for stochastic (UCB1) and adversarial bandits (EXP3) but also those that have “best of both worlds” guarantees (EXP3++ and Tsallis-Inf), even when our constraint on the proportion of contaminated rewards is broken.
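The recipe the abstract describes — replace the empirical mean inside a UCB index with a robust mean estimate — can be sketched in a few lines. The sketch below pairs an α-trimmed mean with a UCB1-style confidence radius; the function names, the trimming level eps, the sub-Gaussian scale sigma, and the constant inside the radius are illustrative assumptions, not the paper's exact crUCB estimators or confidence widths.

```python
# Minimal sketch of a UCB loop with a trimmed-mean estimator, in the spirit
# of crUCB. All names and constants here are illustrative assumptions.
import math
import numpy as np

def trimmed_mean(x, eps):
    """alpha-trimmed mean: drop the eps-fraction smallest and largest rewards."""
    x = np.sort(np.asarray(x, dtype=float))
    k = math.ceil(eps * len(x))
    core = x[k:len(x) - k]
    return core.mean() if core.size else x.mean()

def cr_ucb_sketch(arms, T, eps=0.05, sigma=1.0):
    """UCB loop using a trimmed mean in place of the empirical mean.

    arms: list of callables; arms[a]() returns one (possibly contaminated) reward.
    Returns the per-arm reward histories.
    """
    K = len(arms)
    history = [[] for _ in range(K)]
    for t in range(1, T + 1):
        if t <= K:
            a = t - 1  # pull each arm once to initialize
        else:
            # UCB1-style confidence radius around the robust mean estimate
            a = int(np.argmax([
                trimmed_mean(history[i], eps)
                + sigma * math.sqrt(2 * math.log(t) / len(history[i]))
                for i in range(K)
            ]))
        history[a].append(arms[a]())
    return history
```

For instance, passing two callables that draw Gaussian rewards with different means, while an adversary occasionally substitutes arbitrary values for at most an $\varepsilon$ fraction of each arm's returns, reproduces the contaminated setting the abstract describes.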

Cite this Paper


BibTeX
@InProceedings{pmlr-v124-niss20a,
  title = {What You See May Not Be What You Get: UCB Bandit Algorithms Robust to $\varepsilon$-Contamination},
  author = {Niss, Laura and Tewari, Ambuj},
  booktitle = {Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)},
  pages = {450--459},
  year = {2020},
  editor = {Peters, Jonas and Sontag, David},
  volume = {124},
  series = {Proceedings of Machine Learning Research},
  month = {03--06 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v124/niss20a/niss20a.pdf},
  url = {https://proceedings.mlr.press/v124/niss20a.html},
  abstract = {Motivated by applications of bandit algorithms in education, we consider a stochastic multi-armed bandit problem with $\varepsilon$-contaminated rewards. We allow an adversary to arbitrarily give unbounded contaminated rewards with full knowledge of the past and future. We impose only the constraint that at any time $t$ the proportion of contaminated rewards for any action is less than or equal to $\varepsilon$. We derive concentration inequalities for two robust mean estimators for sub-Gaussian distributions in the $\varepsilon$-contamination context. We define the $\varepsilon$-contaminated stochastic bandit problem and use our robust mean estimators to give two variants of a robust Upper Confidence Bound (UCB) algorithm, crUCB. Using regret derived from only the underlying stochastic rewards, both variants of crUCB achieve $O(\sqrt{KT\log T})$ regret. Our simulations are designed to reflect reasonable settings a teacher would experience when implementing a bandit algorithm. We show that in certain adversarial regimes crUCB not only outperforms algorithms designed for stochastic (UCB1) and adversarial bandits (EXP3) but also those that have “best of both worlds” guarantees (EXP3++ and Tsallis-Inf), even when our constraint on the proportion of contaminated rewards is broken.}
}
Endnote
%0 Conference Paper
%T What You See May Not Be What You Get: UCB Bandit Algorithms Robust to $\varepsilon$-Contamination
%A Laura Niss
%A Ambuj Tewari
%B Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
%C Proceedings of Machine Learning Research
%D 2020
%E Jonas Peters
%E David Sontag
%F pmlr-v124-niss20a
%I PMLR
%P 450--459
%U https://proceedings.mlr.press/v124/niss20a.html
%V 124
%X Motivated by applications of bandit algorithms in education, we consider a stochastic multi-armed bandit problem with $\varepsilon$-contaminated rewards. We allow an adversary to arbitrarily give unbounded contaminated rewards with full knowledge of the past and future. We impose only the constraint that at any time $t$ the proportion of contaminated rewards for any action is less than or equal to $\varepsilon$. We derive concentration inequalities for two robust mean estimators for sub-Gaussian distributions in the $\varepsilon$-contamination context. We define the $\varepsilon$-contaminated stochastic bandit problem and use our robust mean estimators to give two variants of a robust Upper Confidence Bound (UCB) algorithm, crUCB. Using regret derived from only the underlying stochastic rewards, both variants of crUCB achieve $O(\sqrt{KT\log T})$ regret. Our simulations are designed to reflect reasonable settings a teacher would experience when implementing a bandit algorithm. We show that in certain adversarial regimes crUCB not only outperforms algorithms designed for stochastic (UCB1) and adversarial bandits (EXP3) but also those that have “best of both worlds” guarantees (EXP3++ and Tsallis-Inf), even when our constraint on the proportion of contaminated rewards is broken.
APA
Niss, L. & Tewari, A. (2020). What You See May Not Be What You Get: UCB Bandit Algorithms Robust to $\varepsilon$-Contamination. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), in Proceedings of Machine Learning Research 124:450-459. Available from https://proceedings.mlr.press/v124/niss20a.html.
