Bandits with Delayed, Aggregated Anonymous Feedback

Ciara Pike-Burke, Shipra Agrawal, Csaba Szepesvari, Steffen Grunewalder
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4105-4113, 2018.

Abstract

We study a variant of the stochastic $K$-armed bandit problem, which we call "bandits with delayed, aggregated anonymous feedback". In this problem, when the player pulls an arm, a reward is generated; however, it is not immediately observed. Instead, at the end of each round the player observes only the sum of a number of previously generated rewards that happen to arrive in the given round. The rewards are stochastically delayed, and due to the aggregated nature of the observations, the information of which arm led to a particular reward is lost. The question is: what is the cost of the information loss due to this delayed, aggregated anonymous feedback? Previous works have studied bandits with stochastic, non-anonymous delays and found that the regret increases only by an additive factor relating to the expected delay. In this paper, we show that this additive regret increase can be maintained in the harder delayed, aggregated anonymous feedback setting when the expected delay (or a bound on it) is known. We provide an algorithm that matches the worst-case regret of the non-anonymous problem exactly when the delays are bounded, and up to logarithmic factors or an additive variance term for unbounded delays.
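To make the feedback model concrete, below is a minimal simulation sketch of the interaction protocol the abstract describes. The Bernoulli rewards, geometric delay distribution, and uniform exploration policy are illustrative assumptions for this sketch only; they are not specified by the paper, and this is not the paper's algorithm.

```python
import numpy as np
from collections import defaultdict

def simulate_feedback(K=3, horizon=20, delay_p=0.3, seed=0):
    """Minimal sketch of delayed, aggregated anonymous feedback.

    A pull of an arm at round t generates a reward that surfaces only at
    round t + delay. At the end of each round the player sees a single
    number: the sum of all rewards arriving in that round, with no
    indication of which pulls produced them.
    """
    rng = np.random.default_rng(seed)
    mu = rng.uniform(size=K)            # unknown mean reward per arm (hypothetical)
    arriving = defaultdict(float)       # round -> aggregated payoff landing there

    history = []
    for t in range(horizon):
        arm = int(rng.integers(K))                 # placeholder policy: uniform
        reward = float(rng.random() < mu[arm])     # Bernoulli(mu[arm]) reward
        delay = int(rng.geometric(delay_p)) - 1    # illustrative delay on {0, 1, ...}
        arriving[t + delay] += reward              # reward arrives `delay` rounds later
        history.append((arm, arriving[t]))         # observed: only the anonymous sum
    return mu, history

if __name__ == "__main__":
    mu, history = simulate_feedback()
    print("true means:", np.round(mu, 2))
    for t, (arm, obs) in enumerate(history):
        print(f"round {t:2d}: pulled arm {arm}, observed aggregate {obs:.0f}")
```

Note how the crux of the setting shows up in the last line of the loop: the learner records which arm it pulled, but the observation is an aggregate over rewards from possibly many past pulls, so no per-arm attribution is possible. The paper's contribution is an algorithm that nonetheless recovers near-optimal regret when the expected delay (or a bound on it) is known.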

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-pike-burke18a,
  title     = {Bandits with Delayed, Aggregated Anonymous Feedback},
  author    = {Pike-Burke, Ciara and Agrawal, Shipra and Szepesvari, Csaba and Grunewalder, Steffen},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4105--4113},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/pike-burke18a/pike-burke18a.pdf},
  url       = {https://proceedings.mlr.press/v80/pike-burke18a.html}
}
Endnote
%0 Conference Paper
%T Bandits with Delayed, Aggregated Anonymous Feedback
%A Ciara Pike-Burke
%A Shipra Agrawal
%A Csaba Szepesvari
%A Steffen Grunewalder
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-pike-burke18a
%I PMLR
%P 4105--4113
%U https://proceedings.mlr.press/v80/pike-burke18a.html
%V 80
APA
Pike-Burke, C., Agrawal, S., Szepesvari, C. & Grunewalder, S. (2018). Bandits with Delayed, Aggregated Anonymous Feedback. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4105-4113. Available from https://proceedings.mlr.press/v80/pike-burke18a.html.
