Cheap Bandits

Manjesh Hanawal, Venkatesh Saligrama, Michal Valko, Remi Munos
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:2133-2142, 2015.

Abstract

We consider stochastic sequential learning problems where the learner can observe the average reward of several actions. Such a setting is interesting in many applications involving monitoring and surveillance, where the set of actions to observe represents some (geographical) area. The importance of this setting is that in these applications, it is actually cheaper to observe the average reward of a group of actions than the reward of a single action. We show that when the reward is smooth over a given graph representing the neighboring actions, we can maximize the cumulative reward of learning while minimizing the sensing cost. In this paper we propose CheapUCB, an algorithm that matches the regret guarantees of the known algorithms for this setting and at the same time guarantees a linear cost gain over them. As a by-product of our analysis, we establish an Ω(√(dT)) lower bound on the cumulative regret of spectral bandits for a class of graphs with effective dimension d.
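The setting the abstract describes can be illustrated with a toy sketch (this is not the authors' CheapUCB algorithm, and the class, parameter names, and cost model below are illustrative assumptions): the learner probes a *group* of neighboring nodes, observes only the noisy average reward over that group, and pays a sensing cost that shrinks as the group gets wider.

```python
import random

# Illustrative toy environment for averaged group sensing (hypothetical,
# not the paper's implementation): sensing a set of nodes returns the
# noisy AVERAGE reward over the set, and coarser probes cost less.
class AveragedSensingEnv:
    def __init__(self, rewards, noise=0.1, seed=0):
        self.rewards = rewards          # true mean reward of each node
        self.noise = noise              # std. dev. of observation noise
        self.rng = random.Random(seed)

    def sense(self, nodes):
        # Observed signal: noisy average reward over the sensed group.
        avg = sum(self.rewards[v] for v in nodes) / len(nodes)
        obs = avg + self.rng.gauss(0.0, self.noise)
        # Assumed cost model: cost decreases with the size of the group,
        # capturing "it is cheaper to observe a group than a single action".
        cost = 1.0 / len(nodes)
        return obs, cost

# Rewards varying smoothly along a path graph 0-1-2-3-4, as in the
# paper's smoothness assumption over the action graph.
env = AveragedSensingEnv(rewards=[0.1, 0.2, 0.3, 0.4, 0.5], noise=0.0)
obs, cost = env.sense([1, 2, 3])
print(obs, cost)
```

Sensing the group {1, 2, 3} here yields the average of their rewards (0.3) at a third of the cost of sensing a single node, which is the trade-off the paper's regret-versus-cost analysis formalizes.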

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-hanawal15,
  title     = {Cheap Bandits},
  author    = {Manjesh Hanawal and Venkatesh Saligrama and Michal Valko and Remi Munos},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {2133--2142},
  year      = {2015},
  editor    = {Francis Bach and David Blei},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/hanawal15.pdf},
  url       = {http://proceedings.mlr.press/v37/hanawal15.html},
  abstract  = {We consider stochastic sequential learning problems where the learner can observe the average reward of several actions. Such a setting is interesting in many applications involving monitoring and surveillance, where the set of actions to observe represents some (geographical) area. The importance of this setting is that in these applications, it is actually cheaper to observe the average reward of a group of actions than the reward of a single action. We show that when the reward is smooth over a given graph representing the neighboring actions, we can maximize the cumulative reward of learning while minimizing the sensing cost. In this paper we propose CheapUCB, an algorithm that matches the regret guarantees of the known algorithms for this setting and at the same time guarantees a linear cost gain over them. As a by-product of our analysis, we establish an $\Omega(\sqrt{dT})$ lower bound on the cumulative regret of spectral bandits for a class of graphs with effective dimension $d$.}
}
Endnote
%0 Conference Paper
%T Cheap Bandits
%A Manjesh Hanawal
%A Venkatesh Saligrama
%A Michal Valko
%A Remi Munos
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-hanawal15
%I PMLR
%P 2133--2142
%U http://proceedings.mlr.press/v37/hanawal15.html
%V 37
%X We consider stochastic sequential learning problems where the learner can observe the average reward of several actions. Such a setting is interesting in many applications involving monitoring and surveillance, where the set of actions to observe represents some (geographical) area. The importance of this setting is that in these applications, it is actually cheaper to observe the average reward of a group of actions than the reward of a single action. We show that when the reward is smooth over a given graph representing the neighboring actions, we can maximize the cumulative reward of learning while minimizing the sensing cost. In this paper we propose CheapUCB, an algorithm that matches the regret guarantees of the known algorithms for this setting and at the same time guarantees a linear cost gain over them. As a by-product of our analysis, we establish an Ω(√(dT)) lower bound on the cumulative regret of spectral bandits for a class of graphs with effective dimension d.
RIS
TY - CPAPER
TI - Cheap Bandits
AU - Manjesh Hanawal
AU - Venkatesh Saligrama
AU - Michal Valko
AU - Remi Munos
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-hanawal15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 2133
EP - 2142
L1 - http://proceedings.mlr.press/v37/hanawal15.pdf
UR - http://proceedings.mlr.press/v37/hanawal15.html
AB - We consider stochastic sequential learning problems where the learner can observe the average reward of several actions. Such a setting is interesting in many applications involving monitoring and surveillance, where the set of actions to observe represents some (geographical) area. The importance of this setting is that in these applications, it is actually cheaper to observe the average reward of a group of actions than the reward of a single action. We show that when the reward is smooth over a given graph representing the neighboring actions, we can maximize the cumulative reward of learning while minimizing the sensing cost. In this paper we propose CheapUCB, an algorithm that matches the regret guarantees of the known algorithms for this setting and at the same time guarantees a linear cost gain over them. As a by-product of our analysis, we establish an Ω(√(dT)) lower bound on the cumulative regret of spectral bandits for a class of graphs with effective dimension d.
ER -
APA
Hanawal, M., Saligrama, V., Valko, M. &amp; Munos, R. (2015). Cheap Bandits. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:2133-2142. Available from http://proceedings.mlr.press/v37/hanawal15.html.
