Best-of-three-worlds Analysis for Linear Bandits with Follow-the-regularized-leader Algorithm

Fang Kong, Canzhe Zhao, Shuai Li
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:657-673, 2023.

Abstract

The linear bandit problem has been studied for many years in both stochastic and adversarial settings. Designing an algorithm that performs well without knowing the loss type has attracted considerable interest. \citet{LeeLWZ021} propose an algorithm that actively detects the loss type and then switches between algorithms specially designed for specific settings. However, such an approach requires meticulous design to perform well in all environments. Follow-the-regularized-leader (FTRL) is another popular type of algorithm that can adapt to different environments. It has a simple design, and in traditional multi-armed bandit problems its regret bounds are shown to be optimal compared with the detect-switch type. Designing an FTRL-type algorithm for linear bandits is an important question that has long been open. In this paper, we prove that the FTRL algorithm with a negative-entropy regularizer can achieve best-of-three-worlds results for the linear bandit problem. Our regret bounds achieve the same or nearly the same order as the previous detect-switch type algorithm, but with a much simpler algorithmic design.
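To illustrate the FTRL template the abstract refers to, here is a minimal sketch of FTRL with a negative-entropy regularizer in the plain multi-armed bandit setting (not the paper's linear-bandit algorithm; the environment, horizon, and learning rate below are illustrative assumptions). With the negative-entropy regularizer, the FTRL update has the closed form of exponential weights over cumulative importance-weighted loss estimates.

```python
import numpy as np

def ftrl_neg_entropy(loss_fn, K=3, T=2000, eta=0.1, seed=0):
    """FTRL with negative-entropy regularizer (multi-armed bandit sketch).

    With regularizer sum_i p_i log p_i, the FTRL minimizer has the
    closed form p_t(i) proportional to exp(-eta * L_hat(i)), where
    L_hat accumulates importance-weighted loss estimates.
    """
    rng = np.random.default_rng(seed)
    L_hat = np.zeros(K)      # cumulative importance-weighted loss estimates
    total_loss = 0.0
    for _ in range(T):
        # closed-form FTRL solution; subtract min for numerical stability
        w = np.exp(-eta * (L_hat - L_hat.min()))
        p = w / w.sum()
        arm = rng.choice(K, p=p)
        loss = loss_fn(arm, rng)          # only the played arm's loss is seen
        total_loss += loss
        L_hat[arm] += loss / p[arm]       # unbiased estimate of the loss vector
    return total_loss / T

# Toy stochastic environment (assumed for illustration): arm 0 is best.
means = [0.2, 0.5, 0.6]
avg = ftrl_neg_entropy(lambda a, rng: float(rng.random() < means[a]))
```

In a stochastic run like this, the distribution concentrates on the best arm and the average loss approaches its mean; the paper's contribution is showing that the same FTRL principle yields best-of-three-worlds guarantees in the linear bandit setting.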

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-kong23a,
  title     = {Best-of-three-worlds Analysis for Linear Bandits with Follow-the-regularized-leader Algorithm},
  author    = {Kong, Fang and Zhao, Canzhe and Li, Shuai},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {657--673},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/kong23a/kong23a.pdf},
  url       = {https://proceedings.mlr.press/v195/kong23a.html},
  abstract  = {The linear bandit problem has been studied for many years in both stochastic and adversarial settings. Designing an algorithm that can optimize the environment without knowing the loss type attracts lots of interest. \citet{LeeLWZ021} propose an algorithm that actively detects the loss type and then switches between different algorithms specially designed for specific settings. However, such an approach requires meticulous designs to perform well in all environments. Follow-the-regularized-leader (FTRL) is another type of popular algorithm that can adapt to different environments. This algorithm is of simple design and the regret bounds are shown to be optimal in traditional multi-armed bandit problems compared with the detect-switch type. Designing an FTRL-type algorithm for linear bandits is an important question that has been open for a long time. In this paper, we prove that the FTRL algorithm with a negative entropy regularizer can achieve the best-of-three-world results for the linear bandit problem. Our regret bounds achieve the same or nearly the same order as the previous detect-switch type algorithm but with a much simpler algorithmic design.}
}
Endnote
%0 Conference Paper %T Best-of-three-worlds Analysis for Linear Bandits with Follow-the-regularized-leader Algorithm %A Fang Kong %A Canzhe Zhao %A Shuai Li %B Proceedings of Thirty Sixth Conference on Learning Theory %C Proceedings of Machine Learning Research %D 2023 %E Gergely Neu %E Lorenzo Rosasco %F pmlr-v195-kong23a %I PMLR %P 657--673 %U https://proceedings.mlr.press/v195/kong23a.html %V 195 %X The linear bandit problem has been studied for many years in both stochastic and adversarial settings. Designing an algorithm that can optimize the environment without knowing the loss type attracts lots of interest. \citet{LeeLWZ021} propose an algorithm that actively detects the loss type and then switches between different algorithms specially designed for specific settings. However, such an approach requires meticulous designs to perform well in all environments. Follow-the-regularized-leader (FTRL) is another type of popular algorithm that can adapt to different environments. This algorithm is of simple design and the regret bounds are shown to be optimal in traditional multi-armed bandit problems compared with the detect-switch type. Designing an FTRL-type algorithm for linear bandits is an important question that has been open for a long time. In this paper, we prove that the FTRL algorithm with a negative entropy regularizer can achieve the best-of-three-world results for the linear bandit problem. Our regret bounds achieve the same or nearly the same order as the previous detect-switch type algorithm but with a much simpler algorithmic design.
APA
Kong, F., Zhao, C. & Li, S. (2023). Best-of-three-worlds Analysis for Linear Bandits with Follow-the-regularized-leader Algorithm. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:657-673. Available from https://proceedings.mlr.press/v195/kong23a.html.