A Complete Characterization of Learnability for Stochastic Noisy Bandits

Steve Hanneke, Kun Wang
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:560-577, 2025.

Abstract

We study the stochastic noisy bandit problem with an unknown reward function $f^*$ in a known function class $\mathcal{F}$. Formally, a model $M$ maps arms $\pi$ to a probability distribution $M(\pi)$ of reward. A model class $\mathcal{M}$ is a collection of models. For each model $M$, define its mean reward function $f^M(\pi)=\mathbb{E}_{r \sim M(\pi)}[r]$. In the bandit learning problem, we proceed in rounds, pulling one arm $\pi$ each round and observing a reward sampled from $M(\pi)$. With knowledge of $\mathcal{M}$, supposing that the true model $M \in \mathcal{M}$, the objective is to identify an arm $\hat{\pi}$ of near-maximal mean reward $f^M(\hat{\pi})$ with high probability in a bounded number of rounds. If this is possible, then the model class is said to be learnable. Importantly, a result of Hanneke and Yang (2023) shows there exist model classes for which learnability is undecidable. However, the model class they consider features deterministic rewards, and they raise the question of whether learnability is decidable for classes containing sufficiently noisy models. More formally, for any function class $\mathcal{F}$ of mean reward functions, we denote by $\mathcal{M}_{\mathcal{F}}$ the set of all models $M$ such that $f^M \in \mathcal{F}$. In other words, $\mathcal{M}_{\mathcal{F}}$ admits arbitrary zero-mean noise. Hanneke and Yang (2023) ask the question: Can one give a simple complete characterization of which function classes $\mathcal{F}$ satisfy that $\mathcal{M}_{\mathcal{F}}$ is learnable? For the first time, we answer this question in the positive by giving a complete characterization of learnability for model classes $\mathcal{M}_{\mathcal{F}}$. In addition to that, we also describe the full spectrum of possible optimal query complexities. Further, we prove adaptivity is sometimes necessary to achieve the optimal query complexity. Last, we revisit an important complexity measure for interactive decision making, the Decision-Estimation-Coefficient (Foster et al., 2021, 2023), and propose a new variant of the DEC which also characterizes learnability in this setting.
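To make the query model in the abstract concrete, here is a minimal sketch of the interaction protocol for a finite arm set: a model $M$ maps each arm to a reward distribution (Bernoulli below, for simplicity), and a learner pulls arms and outputs an arm of near-maximal mean reward. The non-adaptive uniform-sampling strategy and all names here are illustrative only; the paper's characterization concerns general (possibly infinite) function classes, not this toy instance.

```python
import random
from statistics import mean

def make_model(means):
    """A model M: arm index -> a sample from that arm's reward
    distribution M(pi). Here rewards are Bernoulli with the given
    means, so `means` plays the role of the mean reward function f^M."""
    return lambda arm: float(random.random() < means[arm])

def identify_near_optimal_arm(model, n_arms, pulls_per_arm=2000):
    """Naive non-adaptive learner: pull every arm the same number of
    times, then return the arm with the highest empirical mean reward.
    This only illustrates the round-by-round query protocol; it is not
    the paper's algorithm."""
    estimates = [mean(model(a) for _ in range(pulls_per_arm))
                 for a in range(n_arms)]
    return max(range(n_arms), key=lambda a: estimates[a])

random.seed(0)
f_M = [0.2, 0.8, 0.5]                      # true mean reward function f^M
M = make_model(f_M)                        # the (unknown-to-the-learner) model
pi_hat = identify_near_optimal_arm(M, n_arms=3)
```

With enough pulls per arm, `pi_hat` is the optimal arm (index 1) with high probability, matching the "near-maximal mean reward with high probability in a bounded number of rounds" objective.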

Cite this Paper


BibTeX
@InProceedings{pmlr-v272-hanneke25c,
  title     = {A Complete Characterization of Learnability for Stochastic Noisy Bandits},
  author    = {Hanneke, Steve and Wang, Kun},
  booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
  pages     = {560--577},
  year      = {2025},
  editor    = {Kamath, Gautam and Loh, Po-Ling},
  volume    = {272},
  series    = {Proceedings of Machine Learning Research},
  month     = {24--27 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v272/main/assets/hanneke25c/hanneke25c.pdf},
  url       = {https://proceedings.mlr.press/v272/hanneke25c.html},
  abstract  = {We study the stochastic noisy bandit problem with an unknown reward function $f^*$ in a known function class $\mathcal{F}$. Formally, a model $M$ maps arms $\pi$ to a probability distribution $M(\pi)$ of reward. A model class $\mathcal{M}$ is a collection of models. For each model $M$, define its mean reward function $f^M(\pi)=\mathbb{E}_{r \sim M(\pi)}[r]$. In the bandit learning problem, we proceed in rounds, pulling one arm $\pi$ each round and observing a reward sampled from $M(\pi)$. With knowledge of $\mathcal{M}$, supposing that the true model $M\in \mathcal{M}$, the objective is to identify an arm $\hat{\pi}$ of near-maximal mean reward $f^M(\hat{\pi})$ with high probability in a bounded number of rounds. If this is possible, then the model class is said to be learnable. Importantly, a result of Hanneke and Yang (2023) shows there exist model classes for which learnability is undecidable. However, the model class they consider features deterministic rewards, and they raise the question of whether learnability is decidable for classes containing sufficiently noisy models. More formally, for any function class $\mathcal{F}$ of mean reward functions, we denote by $\mathcal{M}_{\mathcal{F}}$ the set of all models $M$ such that $f^M \in \mathcal{F}$. In other words, $\mathcal{M}_{\mathcal{F}}$ admits arbitrary zero-mean noise. Hanneke and Yang (2023) ask the question: Can one give a simple complete characterization of which function classes $\mathcal{F}$ satisfy that $\mathcal{M}_{\mathcal{F}}$ is learnable? For the first time, we answer this question in the positive by giving a complete characterization of learnability for model classes $\mathcal{M}_{\mathcal{F}}$. In addition to that, we also describe the full spectrum of possible optimal query complexities. Further, we prove adaptivity is sometimes necessary to achieve the optimal query complexity. Last, we revisit an important complexity measure for interactive decision making, the Decision-Estimation-Coefficient (Foster et al., 2021, 2023), and propose a new variant of the DEC which also characterizes learnability in this setting.}
}
Endnote
%0 Conference Paper
%T A Complete Characterization of Learnability for Stochastic Noisy Bandits
%A Steve Hanneke
%A Kun Wang
%B Proceedings of The 36th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Gautam Kamath
%E Po-Ling Loh
%F pmlr-v272-hanneke25c
%I PMLR
%P 560--577
%U https://proceedings.mlr.press/v272/hanneke25c.html
%V 272
%X We study the stochastic noisy bandit problem with an unknown reward function $f^*$ in a known function class $\mathcal{F}$. Formally, a model $M$ maps arms $\pi$ to a probability distribution $M(\pi)$ of reward. A model class $\mathcal{M}$ is a collection of models. For each model $M$, define its mean reward function $f^M(\pi)=\mathbb{E}_{r \sim M(\pi)}[r]$. In the bandit learning problem, we proceed in rounds, pulling one arm $\pi$ each round and observing a reward sampled from $M(\pi)$. With knowledge of $\mathcal{M}$, supposing that the true model $M\in \mathcal{M}$, the objective is to identify an arm $\hat{\pi}$ of near-maximal mean reward $f^M(\hat{\pi})$ with high probability in a bounded number of rounds. If this is possible, then the model class is said to be learnable. Importantly, a result of Hanneke and Yang (2023) shows there exist model classes for which learnability is undecidable. However, the model class they consider features deterministic rewards, and they raise the question of whether learnability is decidable for classes containing sufficiently noisy models. More formally, for any function class $\mathcal{F}$ of mean reward functions, we denote by $\mathcal{M}_{\mathcal{F}}$ the set of all models $M$ such that $f^M \in \mathcal{F}$. In other words, $\mathcal{M}_{\mathcal{F}}$ admits arbitrary zero-mean noise. Hanneke and Yang (2023) ask the question: Can one give a simple complete characterization of which function classes $\mathcal{F}$ satisfy that $\mathcal{M}_{\mathcal{F}}$ is learnable? For the first time, we answer this question in the positive by giving a complete characterization of learnability for model classes $\mathcal{M}_{\mathcal{F}}$. In addition to that, we also describe the full spectrum of possible optimal query complexities. Further, we prove adaptivity is sometimes necessary to achieve the optimal query complexity. Last, we revisit an important complexity measure for interactive decision making, the Decision-Estimation-Coefficient (Foster et al., 2021, 2023), and propose a new variant of the DEC which also characterizes learnability in this setting.
APA
Hanneke, S. & Wang, K. (2025). A Complete Characterization of Learnability for Stochastic Noisy Bandits. Proceedings of The 36th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 272:560-577. Available from https://proceedings.mlr.press/v272/hanneke25c.html.