Non-Stationary Delayed Bandits with Intermediate Observations

Claire Vernade, Andras Gyorgy, Timothy Mann
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9722-9732, 2020.

Abstract

Online recommender systems often face long delays in receiving feedback, especially when optimizing for some long-term metrics. While mitigating the effects of delays in learning is well-understood in stationary environments, the problem becomes much more challenging when the environment changes. In fact, if the timescale of the change is comparable to the delay, it is impossible to learn about the environment, since the available observations are already obsolete. However, the arising issues can be addressed if intermediate signals are available without delay, such that given those signals, the long-term behavior of the system is stationary. To model this situation, we introduce the problem of stochastic, non-stationary, delayed bandits with intermediate observations. We develop a computationally efficient algorithm based on UCRL, and prove sublinear regret guarantees for its performance. Experimental results demonstrate that our method is able to learn in non-stationary delayed environments where existing methods fail.

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-vernade20b,
  title     = {Non-Stationary Delayed Bandits with Intermediate Observations},
  author    = {Vernade, Claire and Gyorgy, Andras and Mann, Timothy},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9722--9732},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/vernade20b/vernade20b.pdf},
  url       = {https://proceedings.mlr.press/v119/vernade20b.html},
  abstract  = {Online recommender systems often face long delays in receiving feedback, especially when optimizing for some long-term metrics. While mitigating the effects of delays in learning is well-understood in stationary environments, the problem becomes much more challenging when the environment changes. In fact, if the timescale of the change is comparable to the delay, it is impossible to learn about the environment, since the available observations are already obsolete. However, the arising issues can be addressed if intermediate signals are available without delay, such that given those signals, the long-term behavior of the system is stationary. To model this situation, we introduce the problem of stochastic, non-stationary, delayed bandits with intermediate observations. We develop a computationally efficient algorithm based on UCRL, and prove sublinear regret guarantees for its performance. Experimental results demonstrate that our method is able to learn in non-stationary delayed environments where existing methods fail.}
}
Endnote
%0 Conference Paper
%T Non-Stationary Delayed Bandits with Intermediate Observations
%A Claire Vernade
%A Andras Gyorgy
%A Timothy Mann
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-vernade20b
%I PMLR
%P 9722--9732
%U https://proceedings.mlr.press/v119/vernade20b.html
%V 119
%X Online recommender systems often face long delays in receiving feedback, especially when optimizing for some long-term metrics. While mitigating the effects of delays in learning is well-understood in stationary environments, the problem becomes much more challenging when the environment changes. In fact, if the timescale of the change is comparable to the delay, it is impossible to learn about the environment, since the available observations are already obsolete. However, the arising issues can be addressed if intermediate signals are available without delay, such that given those signals, the long-term behavior of the system is stationary. To model this situation, we introduce the problem of stochastic, non-stationary, delayed bandits with intermediate observations. We develop a computationally efficient algorithm based on UCRL, and prove sublinear regret guarantees for its performance. Experimental results demonstrate that our method is able to learn in non-stationary delayed environments where existing methods fail.
APA
Vernade, C., Gyorgy, A. & Mann, T. (2020). Non-Stationary Delayed Bandits with Intermediate Observations. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9722-9732. Available from https://proceedings.mlr.press/v119/vernade20b.html.