Improved Path-length Regret Bounds for Bandits

Sébastien Bubeck, Yuanzhi Li, Haipeng Luo, Chen-Yu Wei
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:508-528, 2019.

Abstract

We study adaptive regret bounds in terms of the variation of the losses (the so-called path-length bounds) for both multi-armed bandits and, more generally, linear bandits. We first show that the seemingly suboptimal path-length bound of (Wei and Luo, 2018) is in fact not improvable against an adaptive adversary. Despite this negative result, we then develop two new algorithms: one that strictly improves over (Wei and Luo, 2018) with a smaller path-length measure, and another that improves over (Wei and Luo, 2018) against an oblivious adversary when the path-length is large. Our algorithms are based on the well-studied optimistic mirror descent framework, but importantly with several novel techniques, including new optimistic predictions, a slight bias towards recently selected arms, and the use of a hybrid regularizer similar to that of (Bubeck et al., 2018). Furthermore, we extend our results to linear bandits by showing a reduction to obtaining dynamic regret for a full-information problem, followed by a further reduction to convex body chasing. As a consequence, we obtain new dynamic regret results as well as the first path-length regret bounds for general linear bandits.
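To make the optimistic mirror descent framework mentioned above concrete, the following is a minimal sketch for the K-armed bandit, where the optimistic prediction m_t is simply the most recently observed loss of each arm and losses are estimated by importance weighting. For simplicity it uses the negative-entropy regularizer (multiplicative updates) rather than the log-barrier and hybrid regularizers analyzed in the paper, so it illustrates only the general framework, not the authors' algorithms; the function name optimistic_omd_bandit, the step size, and the specific prediction rule are illustrative assumptions.

import numpy as np

def optimistic_omd_bandit(losses, eta=0.05, rng=None):
    """Sketch of optimistic online mirror descent with bandit feedback.

    losses: (T, K) array of losses in [0, 1] chosen by the environment.
    Uses a negative-entropy regularizer for simplicity (not the paper's choice).
    """
    rng = np.random.default_rng(rng)
    T, K = losses.shape
    x = np.full(K, 1.0 / K)   # mirror-descent iterate on the simplex
    m = np.zeros(K)           # optimistic prediction of the next loss vector
    total_loss = 0.0
    for t in range(T):
        # Optimistic half-step: shift the iterate using the prediction m.
        w = x * np.exp(-eta * m)
        p = w / w.sum()
        # Sample an arm and observe only its loss (bandit feedback).
        arm = rng.choice(K, p=p)
        loss = losses[t, arm]
        total_loss += loss
        # Importance-weighted estimate of (l_t - m) on the played arm, plus m.
        hat_l = m.copy()
        hat_l[arm] += (loss - m[arm]) / p[arm]
        # Mirror-descent update with the estimated loss.
        x = x * np.exp(-eta * hat_l)
        x /= x.sum()
        # Update the prediction: remember the last observed loss of this arm.
        m[arm] = loss
    return total_loss

Under this kind of prediction rule, the estimated losses are small whenever consecutive loss vectors change slowly, which is how path-length quantities of the form sum_t ||l_t - l_{t-1}|| enter the regret analysis.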

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-bubeck19b,
  title     = {Improved Path-length Regret Bounds for Bandits},
  author    = {Bubeck, S{\'e}bastien and Li, Yuanzhi and Luo, Haipeng and Wei, Chen-Yu},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {508--528},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/bubeck19b/bubeck19b.pdf},
  url       = {https://proceedings.mlr.press/v99/bubeck19b.html},
  abstract  = {We study adaptive regret bounds in terms of the variation of the losses (the so-called path-length bounds) for both multi-armed bandit and more generally linear bandit. We first show that the seemingly suboptimal path-length bound of (Wei and Luo, 2018) is in fact not improvable for adaptive adversary. Despite this negative result, we then develop two new algorithms, one that strictly improves over (Wei and Luo, 2018) with a smaller path-length measure, and the other which improves over (Wei and Luo, 2018) for oblivious adversary when the path-length is large. Our algorithms are based on the well-studied optimistic mirror descent framework, but importantly with several novel techniques, including new optimistic predictions, a slight bias towards recently selected arms, and the use of a hybrid regularizer similar to that of (Bubeck et al., 2018). Furthermore, we extend our results to linear bandit by showing a reduction to obtaining dynamic regret for a full-information problem, followed by a further reduction to convex body chasing. As a consequence we obtain new dynamic regret results as well as the first path-length regret bounds for general linear bandit.}
}
Endnote
%0 Conference Paper
%T Improved Path-length Regret Bounds for Bandits
%A Sébastien Bubeck
%A Yuanzhi Li
%A Haipeng Luo
%A Chen-Yu Wei
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-bubeck19b
%I PMLR
%P 508--528
%U https://proceedings.mlr.press/v99/bubeck19b.html
%V 99
%X We study adaptive regret bounds in terms of the variation of the losses (the so-called path-length bounds) for both multi-armed bandit and more generally linear bandit. We first show that the seemingly suboptimal path-length bound of (Wei and Luo, 2018) is in fact not improvable for adaptive adversary. Despite this negative result, we then develop two new algorithms, one that strictly improves over (Wei and Luo, 2018) with a smaller path-length measure, and the other which improves over (Wei and Luo, 2018) for oblivious adversary when the path-length is large. Our algorithms are based on the well-studied optimistic mirror descent framework, but importantly with several novel techniques, including new optimistic predictions, a slight bias towards recently selected arms, and the use of a hybrid regularizer similar to that of (Bubeck et al., 2018). Furthermore, we extend our results to linear bandit by showing a reduction to obtaining dynamic regret for a full-information problem, followed by a further reduction to convex body chasing. As a consequence we obtain new dynamic regret results as well as the first path-length regret bounds for general linear bandit.
APA
Bubeck, S., Li, Y., Luo, H. & Wei, C. (2019). Improved Path-length Regret Bounds for Bandits. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:508-528. Available from https://proceedings.mlr.press/v99/bubeck19b.html.
