Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds

Ehsan Emamjomeh-Zadeh, Chen-Yu Wei, Haipeng Luo, David Kempe
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:599-618, 2021.

Abstract

We revisit the problem of online learning with sleeping experts/bandits: in each time step, only a subset of the actions are available for the algorithm to choose from (and learn about). The work of Kleinberg et al. (2010) showed that there exist no-regret algorithms which perform no worse than the best ranking of actions asymptotically. Unfortunately, achieving this regret bound appears computationally hard: Kanade and Steinke (2014) showed that achieving this no-regret performance is at least as hard as PAC-learning DNFs, a notoriously difficult problem. In the present work, we relax the original problem and study computationally efficient no-approximate-regret algorithms: such algorithms may exceed the optimal cost by a multiplicative constant in addition to the additive regret. We give an algorithm that provides a no-approximate-regret guarantee for the general sleeping expert/bandit problems. For several canonical special cases of the problem, we give algorithms with significantly better approximation ratios; these algorithms also illustrate different techniques for achieving no-approximate-regret guarantees.
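For intuition, the "no-approximate-regret" criterion described above is commonly formalized as an alpha-regret bound against the best fixed ranking of actions. The display below is a sketch of that standard notion under assumed notation (costs c_t, available set A_t, chosen action a_t, ranking pi, approximation factor alpha); it is an illustration, not the paper's exact statement.

% Sketch of an alpha-approximate regret bound (standard notion, hedged:
% notation is assumed, not taken verbatim from the paper).
% c_t(a): adversarial cost of action a at time t
% a_t:    action chosen by the algorithm at time t
% A_t:    set of available (awake) actions at time t
% pi:     a ranking of all actions; pi(A_t) is its highest-ranked action in A_t
\[
  \sum_{t=1}^{T} c_t(a_t)
  \;\le\;
  \alpha \cdot \min_{\pi} \sum_{t=1}^{T} c_t\bigl(\pi(A_t)\bigr)
  \;+\; o(T).
\]
% A no-approximate-regret algorithm keeps the additive term sublinear in T,
% so its average cost is at most alpha times that of the best fixed ranking.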

Cite this Paper


BibTeX
@InProceedings{pmlr-v132-emamjomeh-zadeh21a,
  title =     {Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds},
  author =    {Emamjomeh-Zadeh, Ehsan and Wei, Chen-Yu and Luo, Haipeng and Kempe, David},
  booktitle = {Proceedings of the 32nd International Conference on Algorithmic Learning Theory},
  pages =     {599--618},
  year =      {2021},
  editor =    {Feldman, Vitaly and Ligett, Katrina and Sabato, Sivan},
  volume =    {132},
  series =    {Proceedings of Machine Learning Research},
  month =     {16--19 Mar},
  publisher = {PMLR},
  pdf =       {http://proceedings.mlr.press/v132/emamjomeh-zadeh21a/emamjomeh-zadeh21a.pdf},
  url =       {https://proceedings.mlr.press/v132/emamjomeh-zadeh21a.html},
  abstract =  {We revisit the problem of online learning with sleeping experts/bandits: in each time step, only a subset of the actions are available for the algorithm to choose from (and learn about). The work of Kleinberg et al. (2010) showed that there exist no-regret algorithms which perform no worse than the best ranking of actions asymptotically. Unfortunately, achieving this regret bound appears computationally hard: Kanade and Steinke (2014) showed that achieving this no-regret performance is at least as hard as PAC-learning DNFs, a notoriously difficult problem. In the present work, we relax the original problem and study computationally efficient no-approximate-regret algorithms: such algorithms may exceed the optimal cost by a multiplicative constant in addition to the additive regret. We give an algorithm that provides a no-approximate-regret guarantee for the general sleeping expert/bandit problems. For several canonical special cases of the problem, we give algorithms with significantly better approximation ratios; these algorithms also illustrate different techniques for achieving no-approximate-regret guarantees.}
}
Endnote
%0 Conference Paper
%T Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds
%A Ehsan Emamjomeh-Zadeh
%A Chen-Yu Wei
%A Haipeng Luo
%A David Kempe
%B Proceedings of the 32nd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Vitaly Feldman
%E Katrina Ligett
%E Sivan Sabato
%F pmlr-v132-emamjomeh-zadeh21a
%I PMLR
%P 599--618
%U https://proceedings.mlr.press/v132/emamjomeh-zadeh21a.html
%V 132
%X We revisit the problem of online learning with sleeping experts/bandits: in each time step, only a subset of the actions are available for the algorithm to choose from (and learn about). The work of Kleinberg et al. (2010) showed that there exist no-regret algorithms which perform no worse than the best ranking of actions asymptotically. Unfortunately, achieving this regret bound appears computationally hard: Kanade and Steinke (2014) showed that achieving this no-regret performance is at least as hard as PAC-learning DNFs, a notoriously difficult problem. In the present work, we relax the original problem and study computationally efficient no-approximate-regret algorithms: such algorithms may exceed the optimal cost by a multiplicative constant in addition to the additive regret. We give an algorithm that provides a no-approximate-regret guarantee for the general sleeping expert/bandit problems. For several canonical special cases of the problem, we give algorithms with significantly better approximation ratios; these algorithms also illustrate different techniques for achieving no-approximate-regret guarantees.
APA
Emamjomeh-Zadeh, E., Wei, C., Luo, H. & Kempe, D. (2021). Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds. Proceedings of the 32nd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 132:599-618. Available from https://proceedings.mlr.press/v132/emamjomeh-zadeh21a.html.
