Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes

Yichun Hu, Nathan Kallus, Xiaojie Mao
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:2007-2010, 2020.

Abstract

We study a nonparametric contextual bandit problem where the expected reward functions belong to a Hölder class with smoothness parameter $\beta$. We show how this interpolates between two extremes that were previously studied in isolation: non-differentiable bandits ($\beta\leq1$), where rate-optimal regret is achieved by running separate non-contextual bandits in different context regions, and parametric-response bandits (satisfying $\beta=\infty$), where rate-optimal regret can be achieved with minimal or no exploration due to infinite extrapolatability. We develop a novel algorithm that carefully adjusts to all smoothness settings and we prove its regret is rate-optimal by establishing matching upper and lower bounds, recovering the existing results at the two extremes. In this sense, our work bridges the gap between the existing literature on parametric and non-differentiable contextual bandit problems and between bandit algorithms that exclusively use global or local information, shedding light on the crucial interplay of complexity and regret in contextual bandits.
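The abstract mentions that in the non-differentiable regime ($\beta\leq1$), rate-optimal regret is achieved by running separate non-contextual bandits in different context regions. The sketch below illustrates that general strategy (not the paper's own algorithm): the context space $[0,1]$ is partitioned into bins, and an independent UCB instance runs inside each bin. The reward functions, noise level, and bin count are hypothetical stand-ins chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth mean-reward functions for two arms on context space [0, 1].
# These are illustrative stand-ins, not taken from the paper.
def mu0(x):
    return 0.5 + 0.3 * np.sin(2 * np.pi * x)

def mu1(x):
    return 0.5 + 0.3 * np.cos(2 * np.pi * x)

def binned_ucb(T=20000, n_bins=10, sigma=0.1):
    """Run an independent UCB instance inside each context bin.

    This mimics the classic strategy for non-differentiable (beta <= 1)
    contextual bandits: treat each small context region as its own
    non-contextual bandit, using only local information.
    Returns the cumulative regret against the oracle policy.
    """
    counts = np.zeros((n_bins, 2))   # pulls per (bin, arm)
    sums = np.zeros((n_bins, 2))     # reward sums per (bin, arm)
    regret = 0.0
    for t in range(1, T + 1):
        x = rng.random()                          # context ~ Uniform[0, 1]
        b = min(int(x * n_bins), n_bins - 1)      # bin containing the context
        means = np.array([mu0(x), mu1(x)])
        if counts[b].min() == 0:                  # pull each arm once per bin
            a = int(np.argmin(counts[b]))
        else:
            # UCB with a noise-scaled exploration bonus (assumes known sigma)
            ucb = sums[b] / counts[b] + np.sqrt(
                2 * sigma**2 * np.log(t) / counts[b]
            )
            a = int(np.argmax(ucb))
        r = means[a] + rng.normal(0, sigma)       # noisy observed reward
        counts[b, a] += 1
        sums[b, a] += r
        regret += means.max() - means[a]          # instantaneous regret
    return regret
```

Because each bin ignores every observation outside it, this approach cannot exploit higher-order smoothness; the paper's contribution is an algorithm whose use of global versus local information adapts to the smoothness parameter $\beta$.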

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-hu20a,
  title     = {Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes},
  author    = {Hu, Yichun and Kallus, Nathan and Mao, Xiaojie},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {2007--2010},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/hu20a/hu20a.pdf},
  url       = {https://proceedings.mlr.press/v125/hu20a.html}
}