Budgeted Bandit Problems with Continuous Random Costs

Yingce Xia, Wenkui Ding, Xu-Dong Zhang, Nenghai Yu, Tao Qin;
Asian Conference on Machine Learning, PMLR 45:317-332, 2016.

Abstract

We study the budgeted bandit problem, in which each arm is associated with both a reward and a cost, and the objective is to design an arm-pulling algorithm that maximizes the total reward before the budget runs out. We consider both multi-armed bandits and linear bandits, focusing on the setting with continuous random costs. We propose an upper-confidence-bound based algorithm for multi-armed bandits and a confidence-ball based algorithm for linear bandits, and prove logarithmic regret bounds for both. Simulations verify the effectiveness of the proposed algorithms.
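To illustrate the multi-armed setting, the following is a minimal sketch of a budgeted UCB-style policy: each round, the learner pulls the arm with the largest optimistic reward-to-cost ratio (reward upper confidence bound over cost lower confidence bound) until the budget is exhausted. This is an illustrative variant in the spirit of the abstract, not the paper's exact algorithm; the arm parameters, the Gaussian reward/cost draws, and the clamping constants are all assumptions made for the demo.

```python
import math
import random

def budgeted_ucb(reward_means, cost_means, budget, seed=0):
    """Illustrative budgeted UCB sketch (not the paper's exact rule):
    pull the arm maximizing a reward-UCB / cost-LCB ratio until the
    budget runs out. Rewards and costs are assumed to lie in [0, 1]."""
    rng = random.Random(seed)
    k = len(reward_means)
    pulls = [0] * k
    reward_sum = [0.0] * k
    cost_sum = [0.0] * k
    total_reward = 0.0
    t = 0
    while budget > 0:
        t += 1
        if t <= k:
            # Initialization: pull each arm once.
            i = t - 1
        else:
            def ratio_index(j):
                bonus = math.sqrt(2.0 * math.log(t) / pulls[j])
                r_ucb = min(1.0, reward_sum[j] / pulls[j] + bonus)
                c_lcb = max(1e-6, cost_sum[j] / pulls[j] - bonus)
                return r_ucb / c_lcb
            i = max(range(k), key=ratio_index)
        # Continuous random cost and reward (Gaussian draws, clamped;
        # the cost is kept strictly positive so the loop terminates).
        r = min(1.0, max(0.0, rng.gauss(reward_means[i], 0.1)))
        c = min(1.0, max(0.05, rng.gauss(cost_means[i], 0.1)))
        if c > budget:
            break  # cannot afford another pull
        budget -= c
        pulls[i] += 1
        reward_sum[i] += r
        cost_sum[i] += c
        total_reward += r
    return total_reward, pulls

# Demo with hypothetical arm parameters: arm 0 has the best
# reward-to-cost ratio and should receive the most pulls.
total, pulls = budgeted_ucb([0.9, 0.5, 0.3], [0.5, 0.5, 0.5], budget=200.0)
```

The ratio index rewards arms that are both high-reward and cheap; the confidence bonus shrinks as an arm is pulled more often, which is the mechanism behind the logarithmic regret bounds the paper proves.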