Multi-Armed Bandits with Cost Subsidy

Deeksha Sinha, Karthik Abinav Sankararaman, Abbas Kazerouni, Vashist Avadhanula
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3016-3024, 2021.

Abstract

In this paper, we consider a novel variant of the multi-armed bandit (MAB) problem, MAB with cost subsidy, which models many real-life applications where the learning agent has to pay to select an arm and is concerned with optimizing both cumulative costs and rewards. We present two applications, the intelligent SMS routing problem and the ad audience optimization problem faced by several businesses (especially online platforms), and show how our problem uniquely captures key features of these applications. We show that naive generalizations of existing MAB algorithms, such as Upper Confidence Bound and Thompson Sampling, do not perform well for this problem. We then establish a fundamental lower bound on the performance of any online learning algorithm for this problem, highlighting the hardness of our problem in comparison to the classical MAB problem. We also present a simple variant of explore-then-commit and establish near-optimal regret bounds for this algorithm. Lastly, we perform extensive numerical simulations to understand the behavior of a suite of algorithms for various instances and offer a practical guide on when to employ each of them.
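
To make the explore-then-commit variant concrete, here is a minimal Python sketch of one plausible instantiation. It is an illustration, not the authors' exact algorithm: the subsidized objective assumed here (commit to the cheapest arm whose estimated mean reward is within a factor (1 - alpha) of the best arm's estimate), the Bernoulli arm model, and the names cs_etc_sketch and alpha are assumptions introduced for this example, and a careful implementation would track confidence intervals rather than the plain empirical means used below; see the paper for the precise algorithm and analysis.

import numpy as np

def cs_etc_sketch(reward_means, cost_means, alpha, horizon, explore_per_arm, seed=0):
    """Explore-then-commit sketch for MAB with a cost subsidy (illustrative only).

    Phase 1: pull every arm `explore_per_arm` times, estimating its mean
    reward and mean cost from simulated Bernoulli feedback. Phase 2: commit,
    for the remaining rounds, to the cheapest arm whose estimated reward is
    at least (1 - alpha) times the best estimated reward.
    """
    rng = np.random.default_rng(seed)
    k = len(reward_means)
    reward_sum = np.zeros(k)
    cost_sum = np.zeros(k)

    # Phase 1: uniform exploration across all arms.
    for _ in range(explore_per_arm):
        for arm in range(k):
            reward_sum[arm] += rng.random() < reward_means[arm]
            cost_sum[arm] += rng.random() < cost_means[arm]

    reward_hat = reward_sum / explore_per_arm
    cost_hat = cost_sum / explore_per_arm

    # Phase 2: among arms whose estimated reward clears the subsidized
    # threshold, commit to the cheapest one for the remaining rounds.
    threshold = (1.0 - alpha) * reward_hat.max()
    feasible = np.flatnonzero(reward_hat >= threshold)
    committed = int(feasible[np.argmin(cost_hat[feasible])])
    remaining = horizon - k * explore_per_arm
    return committed, remaining, reward_hat, cost_hat

# Example: the cheapest arm is nearly as rewarding as the best arm, so the
# subsidized objective prefers it over the costlier reward-maximizing arm.
arm, remaining, r_hat, c_hat = cs_etc_sketch(
    reward_means=[0.95, 0.90, 0.50],
    cost_means=[0.8, 0.2, 0.1],
    alpha=0.1, horizon=10_000, explore_per_arm=200)
print(f"committed to arm {arm} for the last {remaining} rounds")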

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-sinha21a,
  title     = {Multi-Armed Bandits with Cost Subsidy},
  author    = {Sinha, Deeksha and Abinav Sankararaman, Karthik and Kazerouni, Abbas and Avadhanula, Vashist},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {3016--3024},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/sinha21a/sinha21a.pdf},
  url       = {https://proceedings.mlr.press/v130/sinha21a.html}
}
APA
Sinha, D., Abinav Sankararaman, K., Kazerouni, A. & Avadhanula, V. (2021). Multi-Armed Bandits with Cost Subsidy. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:3016-3024. Available from https://proceedings.mlr.press/v130/sinha21a.html.

Related Material

Download PDF: http://proceedings.mlr.press/v130/sinha21a/sinha21a.pdf