Contextual Bandits with Smooth Regret: Efficient Learning in Continuous Action Spaces

Yinglun Zhu, Paul Mineiro
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:27574-27590, 2022.

Abstract

Designing efficient general-purpose contextual bandit algorithms that work with large—or even infinite—action spaces would facilitate application to important scenarios such as information retrieval, recommendation systems, and continuous control. While obtaining standard regret guarantees can be hopeless, alternative regret notions have been proposed to tackle the large action setting. We propose a smooth regret notion for contextual bandits, which dominates previously proposed alternatives. We design a statistically and computationally efficient algorithm—for the proposed smooth regret—that works with general function approximation under standard supervised oracles. We also present an adaptive algorithm that automatically adapts to any smoothness level. Our algorithms can be used to recover the previous minimax/Pareto optimal guarantees under the standard regret, e.g., in bandit problems with multiple best arms and Lipschitz/Hölder bandits. We conduct large-scale empirical evaluations demonstrating the efficacy of our proposed algorithms.
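To make the abstract's central notion concrete, the following is a sketch of what a smooth regret at smoothness level $h$ looks like in this setting; the symbols $\mu$, $f^\star$, and $\mathcal{Q}_h$ are illustrative notation chosen here, not reproduced from this page. The learner competes against smoothed policies, i.e., action distributions whose density with respect to a base measure $\mu$ on the action space is bounded by $1/h$:

$$
\mathrm{Reg}_h(T) \;=\; \mathbb{E}\left[\,\sum_{t=1}^{T}\left(\sup_{Q \in \mathcal{Q}_h} \mathbb{E}_{a \sim Q}\big[f^\star(x_t, a)\big] \;-\; f^\star(x_t, a_t)\right)\right],
\qquad
\mathcal{Q}_h \;=\; \left\{\, Q \;:\; \frac{dQ}{d\mu} \le \frac{1}{h} \ \ \mu\text{-a.e.} \,\right\},
$$

where $f^\star(x,a)$ denotes the mean reward of action $a$ in context $x$ and $a_t$ is the action played at round $t$. As $h \to 0$ the competitor class $\mathcal{Q}_h$ admits distributions increasingly concentrated on the best action, so smooth regret interpolates toward the standard regret; this interpolation is what allows the algorithms to recover minimax/Pareto optimal guarantees for settings such as Lipschitz/Hölder bandits.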

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-zhu22h,
  title     = {Contextual Bandits with Smooth Regret: Efficient Learning in Continuous Action Spaces},
  author    = {Zhu, Yinglun and Mineiro, Paul},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {27574--27590},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhu22h/zhu22h.pdf},
  url       = {https://proceedings.mlr.press/v162/zhu22h.html},
  abstract  = {Designing efficient general-purpose contextual bandit algorithms that work with large—or even infinite—action spaces would facilitate application to important scenarios such as information retrieval, recommendation systems, and continuous control. While obtaining standard regret guarantees can be hopeless, alternative regret notions have been proposed to tackle the large action setting. We propose a smooth regret notion for contextual bandits, which dominates previously proposed alternatives. We design a statistically and computationally efficient algorithm—for the proposed smooth regret—that works with general function approximation under standard supervised oracles. We also present an adaptive algorithm that automatically adapts to any smoothness level. Our algorithms can be used to recover the previous minimax/Pareto optimal guarantees under the standard regret, e.g., in bandit problems with multiple best arms and Lipschitz/H{ö}lder bandits. We conduct large-scale empirical evaluations demonstrating the efficacy of our proposed algorithms.}
}
Endnote
%0 Conference Paper
%T Contextual Bandits with Smooth Regret: Efficient Learning in Continuous Action Spaces
%A Yinglun Zhu
%A Paul Mineiro
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhu22h
%I PMLR
%P 27574--27590
%U https://proceedings.mlr.press/v162/zhu22h.html
%V 162
%X Designing efficient general-purpose contextual bandit algorithms that work with large—or even infinite—action spaces would facilitate application to important scenarios such as information retrieval, recommendation systems, and continuous control. While obtaining standard regret guarantees can be hopeless, alternative regret notions have been proposed to tackle the large action setting. We propose a smooth regret notion for contextual bandits, which dominates previously proposed alternatives. We design a statistically and computationally efficient algorithm—for the proposed smooth regret—that works with general function approximation under standard supervised oracles. We also present an adaptive algorithm that automatically adapts to any smoothness level. Our algorithms can be used to recover the previous minimax/Pareto optimal guarantees under the standard regret, e.g., in bandit problems with multiple best arms and Lipschitz/Hölder bandits. We conduct large-scale empirical evaluations demonstrating the efficacy of our proposed algorithms.
APA
Zhu, Y. & Mineiro, P. (2022). Contextual Bandits with Smooth Regret: Efficient Learning in Continuous Action Spaces. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:27574-27590. Available from https://proceedings.mlr.press/v162/zhu22h.html.
