Online Boosting with Bandit Feedback

Nataly Brukhim, Elad Hazan
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:397-420, 2021.

Abstract

We consider the problem of online boosting for regression tasks, when only limited information is available to the learner. This setting is motivated by applications in reinforcement learning, in which only partial feedback is provided to the learner. We give an efficient regret minimization method that has two implications. First, we describe an online boosting algorithm with noisy multi-point bandit feedback. Next, we give a new projection-free online convex optimization algorithm with stochastic gradient access, that improves state-of-the-art guarantees in terms of efficiency. Our analysis offers a novel way of incorporating stochastic gradient estimators within Frank-Wolfe-type methods, which circumvents the instability encountered when directly applying projection-free optimization to the stochastic setting.
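The abstract mentions incorporating stochastic gradient estimators into Frank-Wolfe-type (projection-free) methods. The snippet below is a generic illustrative sketch of that idea, not the paper's algorithm: it runs Frank-Wolfe over the probability simplex and averages several noisy gradient queries per step to stabilize the linear-minimization step. The simplex domain, the objective, and the averaging scheme are all assumptions made for illustration.

```python
import numpy as np

def stochastic_frank_wolfe(noisy_grad, dim, T, m=25, seed=0):
    """Illustrative projection-free sketch (not the paper's algorithm):
    Frank-Wolfe over the probability simplex, where each iteration averages
    m noisy gradient queries to stabilize the linear-minimization step."""
    rng = np.random.default_rng(seed)
    x = np.ones(dim) / dim                      # start at the simplex center
    for t in range(T):
        # average m stochastic gradient estimates at the current point
        g = np.mean([noisy_grad(x, rng) for _ in range(m)], axis=0)
        # linear minimization over the simplex: the minimizing vertex is the
        # standard basis vector at the smallest gradient coordinate
        v = np.zeros(dim)
        v[np.argmin(g)] = 1.0
        eta = 2.0 / (t + 3)                     # standard FW step size
        x = (1.0 - eta) * x + eta * v           # convex combination: stays feasible
    return x
```

For example, minimizing f(x) = ||x - c||^2 for a point c inside the simplex, with a gradient oracle that returns 2(x - c) plus Gaussian noise, drives the iterate toward c without ever computing a projection. The key design point, echoed in the abstract, is that a single noisy gradient can flip the argmin and destabilize the vertex choice, so some form of gradient aggregation is needed before the linear-minimization step.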

Cite this Paper


BibTeX
@InProceedings{pmlr-v132-brukhim21a,
  title     = {Online Boosting with Bandit Feedback},
  author    = {Brukhim, Nataly and Hazan, Elad},
  booktitle = {Proceedings of the 32nd International Conference on Algorithmic Learning Theory},
  pages     = {397--420},
  year      = {2021},
  editor    = {Feldman, Vitaly and Ligett, Katrina and Sabato, Sivan},
  volume    = {132},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--19 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v132/brukhim21a/brukhim21a.pdf},
  url       = {https://proceedings.mlr.press/v132/brukhim21a.html},
  abstract  = {We consider the problem of online boosting for regression tasks, when only limited information is available to the learner. This setting is motivated by applications in reinforcement learning, in which only partial feedback is provided to the learner. We give an efficient regret minimization method that has two implications. First, we describe an online boosting algorithm with noisy multi-point bandit feedback. Next, we give a new projection-free online convex optimization algorithm with stochastic gradient access, that improves state-of-the-art guarantees in terms of efficiency. Our analysis offers a novel way of incorporating stochastic gradient estimators within Frank-Wolfe-type methods, which circumvents the instability encountered when directly applying projection-free optimization to the stochastic setting.}
}
Endnote
%0 Conference Paper
%T Online Boosting with Bandit Feedback
%A Nataly Brukhim
%A Elad Hazan
%B Proceedings of the 32nd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Vitaly Feldman
%E Katrina Ligett
%E Sivan Sabato
%F pmlr-v132-brukhim21a
%I PMLR
%P 397--420
%U https://proceedings.mlr.press/v132/brukhim21a.html
%V 132
%X We consider the problem of online boosting for regression tasks, when only limited information is available to the learner. This setting is motivated by applications in reinforcement learning, in which only partial feedback is provided to the learner. We give an efficient regret minimization method that has two implications. First, we describe an online boosting algorithm with noisy multi-point bandit feedback. Next, we give a new projection-free online convex optimization algorithm with stochastic gradient access, that improves state-of-the-art guarantees in terms of efficiency. Our analysis offers a novel way of incorporating stochastic gradient estimators within Frank-Wolfe-type methods, which circumvents the instability encountered when directly applying projection-free optimization to the stochastic setting.
APA
Brukhim, N. & Hazan, E. (2021). Online Boosting with Bandit Feedback. Proceedings of the 32nd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 132:397-420. Available from https://proceedings.mlr.press/v132/brukhim21a.html.