Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension

Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:773-836, 2023.

Abstract

A classical result in online learning characterizes the optimal mistake bound achievable by deterministic learners using the Littlestone dimension (Littlestone ’88). We prove an analogous result for randomized learners: we show that the optimal expected mistake bound in learning a class $\mathcal{H}$ equals its randomized Littlestone dimension, which we define as follows: it is the largest $d$ for which there exists a tree shattered by $\mathcal{H}$ whose average depth is $2d$.

We further study optimal mistake bounds in the agnostic case, as a function of the number of mistakes made by the best function in $\mathcal{H}$, denoted by $k$. Towards this end we introduce the $k$-Littlestone dimension and its randomized variant, and use them to characterize the optimal deterministic and randomized mistake bounds.

Quantitatively, we show that the optimal randomized mistake bound for learning a class with Littlestone dimension $d$ is $k + \Theta(\sqrt{kd} + d)$ (equivalently, the optimal regret is $\Theta(\sqrt{kd} + d)$). This also implies an optimal deterministic mistake bound of $2k + O(\sqrt{kd} + d)$, thus resolving an open question which was studied by Auer and Long [’99].

As an application of our theory, we revisit the classical problem of prediction using expert advice: about 30 years ago Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth studied prediction using expert advice, provided that the best among the $n$ experts makes at most $k$ mistakes, and asked what are the optimal mistake bounds (as a function of $n$ and $k$). Cesa-Bianchi, Freund, Helmbold, and Warmuth [’93, ’96] provided a nearly optimal bound for deterministic learners, and left the randomized case as an open problem. We resolve this question by providing an optimal learning rule in the randomized case, and showing that its expected mistake bound equals half of the deterministic bound, up to negligible additive terms. This improves upon previous works by Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth [’93, ’97], by Abernethy, Langford, and Warmuth [’06], and by Brânzei and Peres [’19], which handled the regimes $k \ll \log n$ or $k \gg \log n$. In contrast, our result applies to all pairs $n, k$, and does so via a unified analysis using the randomized Littlestone dimension.

In our proofs we develop and use optimal learning rules, which can be seen as natural variants of the Standard Optimal Algorithm ($\mathsf{SOA}$) of Littlestone: a weighted variant in the agnostic case, and a probabilistic variant in the randomized case. We conclude the paper with suggested directions for future research and open questions.
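To make the two notions concrete, the following is a minimal brute-force sketch (not from the paper) for a finite class over a finite domain. Here ldim computes the classical Littlestone dimension via the standard splitting recursion, and rand_value computes the minimax expected mistake bound of a randomized learner in the realizable online game, the quantity the paper identifies with the randomized Littlestone dimension. The function names and the threshold-class example are illustrative assumptions, not the paper's notation, and the code is exponential-time, intended only for tiny classes.

def restrict(H, x, y):
    """Hypotheses in H (tuples of 0/1 predictions indexed by the domain) with h(x) = y."""
    return tuple(h for h in H if h[x] == y)

def ldim(H, domain):
    """Littlestone dimension of a finite class H: the depth of the deepest shattered tree,
    computed by the standard recursion max over splitting points x of 1 + min over labels."""
    best = 0
    for x in domain:
        H0, H1 = restrict(H, x, 0), restrict(H, x, 1)
        if H0 and H1:  # x splits H, so the adversary can force a mistake here
            best = max(best, 1 + min(ldim(H0, domain), ldim(H1, domain)))
    return best

def rand_value(H, domain):
    """Minimax expected number of mistakes of an optimal randomized learner against a
    worst-case realizable adversary; the paper's result is that this optimal expected
    mistake bound equals the randomized Littlestone dimension of H."""
    best = 0.0
    for x in domain:
        H0, H1 = restrict(H, x, 0), restrict(H, x, 1)
        if H0 and H1:
            a, b = rand_value(H0, domain), rand_value(H1, domain)
            # The learner predicts 1 with probability p; the adversary then answers the
            # label maximizing (expected mistake now) + (value of the surviving class):
            #   min_p max(p + a, (1 - p) + b),
            # which equals (1 + a + b)/2 when |a - b| <= 1 and max(a, b) otherwise.
            val = (1 + a + b) / 2 if abs(a - b) <= 1 else max(a, b)
            best = max(best, val)
    return best

if __name__ == "__main__":
    # Thresholds over a 4-point domain: h_t(x) = 1 iff x >= t, for t = 0..4.
    domain = tuple(range(4))
    thresholds = tuple(tuple(int(x >= t) for x in domain) for t in range(5))
    print(ldim(thresholds, domain))        # 2
    print(rand_value(thresholds, domain))  # lies in [ldim/2, ldim]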

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-filmus23a,
  title = {Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension},
  author = {Filmus, Yuval and Hanneke, Steve and Mehalel, Idan and Moran, Shay},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages = {773--836},
  year = {2023},
  editor = {Neu, Gergely and Rosasco, Lorenzo},
  volume = {195},
  series = {Proceedings of Machine Learning Research},
  month = {12--15 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v195/filmus23a/filmus23a.pdf},
  url = {https://proceedings.mlr.press/v195/filmus23a.html},
  abstract = {A classical result in online learning characterizes the optimal mistake bound achievable by deterministic learners using the Littlestone dimension (Littlestone ’88). We prove an analogous result for randomized learners: we show that the optimal expected mistake bound in learning a class $\mathcal{H}$ equals its randomized Littlestone dimension, which we define as follows: it is the largest $d$ for which there exists a tree shattered by $\mathcal{H}$ whose average depth is $2d$. We further study optimal mistake bounds in the agnostic case, as a function of the number of mistakes made by the best function in $\mathcal{H}$, denoted by $k$. Towards this end we introduce the $k$-Littlestone dimension and its randomized variant, and use them to characterize the optimal deterministic and randomized mistake bounds. Quantitatively, we show that the optimal randomized mistake bound for learning a class with Littlestone dimension $d$ is $k + \Theta(\sqrt{kd} + d)$ (equivalently, the optimal regret is $\Theta(\sqrt{kd} + d)$). This also implies an optimal deterministic mistake bound of $2k + O(\sqrt{kd} + d)$, thus resolving an open question which was studied by Auer and Long [’99]. As an application of our theory, we revisit the classical problem of prediction using expert advice: about 30 years ago Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth studied prediction using expert advice, provided that the best among the $n$ experts makes at most $k$ mistakes, and asked what are the optimal mistake bounds (as a function of $n$ and $k$). Cesa-Bianchi, Freund, Helmbold, and Warmuth [’93, ’96] provided a nearly optimal bound for deterministic learners, and left the randomized case as an open problem. We resolve this question by providing an optimal learning rule in the randomized case, and showing that its expected mistake bound equals half of the deterministic bound, up to negligible additive terms. This improves upon previous works by Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth [’93, ’97], by Abernethy, Langford, and Warmuth [’06], and by Brânzei and Peres [’19], which handled the regimes $k \ll \log n$ or $k \gg \log n$. In contrast, our result applies to all pairs $n, k$, and does so via a unified analysis using the randomized Littlestone dimension. In our proofs we develop and use optimal learning rules, which can be seen as natural variants of the Standard Optimal Algorithm ($\mathsf{SOA}$) of Littlestone: a weighted variant in the agnostic case, and a probabilistic variant in the randomized case. We conclude the paper with suggested directions for future research and open questions.}
}
Endnote
%0 Conference Paper
%T Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension
%A Yuval Filmus
%A Steve Hanneke
%A Idan Mehalel
%A Shay Moran
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-filmus23a
%I PMLR
%P 773--836
%U https://proceedings.mlr.press/v195/filmus23a.html
%V 195
%X A classical result in online learning characterizes the optimal mistake bound achievable by deterministic learners using the Littlestone dimension (Littlestone ’88). We prove an analogous result for randomized learners: we show that the optimal expected mistake bound in learning a class $\mathcal{H}$ equals its randomized Littlestone dimension, which we define as follows: it is the largest $d$ for which there exists a tree shattered by $\mathcal{H}$ whose average depth is $2d$. We further study optimal mistake bounds in the agnostic case, as a function of the number of mistakes made by the best function in $\mathcal{H}$, denoted by $k$. Towards this end we introduce the $k$-Littlestone dimension and its randomized variant, and use them to characterize the optimal deterministic and randomized mistake bounds. Quantitatively, we show that the optimal randomized mistake bound for learning a class with Littlestone dimension $d$ is $k + \Theta(\sqrt{kd} + d)$ (equivalently, the optimal regret is $\Theta(\sqrt{kd} + d)$). This also implies an optimal deterministic mistake bound of $2k + O(\sqrt{kd} + d)$, thus resolving an open question which was studied by Auer and Long [’99]. As an application of our theory, we revisit the classical problem of prediction using expert advice: about 30 years ago Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth studied prediction using expert advice, provided that the best among the $n$ experts makes at most $k$ mistakes, and asked what are the optimal mistake bounds (as a function of $n$ and $k$). Cesa-Bianchi, Freund, Helmbold, and Warmuth [’93, ’96] provided a nearly optimal bound for deterministic learners, and left the randomized case as an open problem. We resolve this question by providing an optimal learning rule in the randomized case, and showing that its expected mistake bound equals half of the deterministic bound, up to negligible additive terms. This improves upon previous works by Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire and Warmuth [’93, ’97], by Abernethy, Langford, and Warmuth [’06], and by Brânzei and Peres [’19], which handled the regimes $k \ll \log n$ or $k \gg \log n$. In contrast, our result applies to all pairs $n, k$, and does so via a unified analysis using the randomized Littlestone dimension. In our proofs we develop and use optimal learning rules, which can be seen as natural variants of the Standard Optimal Algorithm ($\mathsf{SOA}$) of Littlestone: a weighted variant in the agnostic case, and a probabilistic variant in the randomized case. We conclude the paper with suggested directions for future research and open questions.
APA
Filmus, Y., Hanneke, S., Mehalel, I. & Moran, S. (2023). Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:773-836. Available from https://proceedings.mlr.press/v195/filmus23a.html.
