Fine-Grained Distribution-Dependent Learning Curves

Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya Tolstikhin
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:5890-5924, 2023.

Abstract

Learning curves plot the expected error of a learning algorithm as a function of the number of labeled samples it receives from a target distribution. They are widely used as a measure of an algorithm’s performance, but classic PAC learning theory cannot explain their behavior. As observed by Antos and Lugosi (1996, 1998), the classic ‘No Free Lunch’ lower bounds only trace the upper envelope above all learning curves of specific target distributions. For a concept class with VC dimension $d$ the classic bound decays like $d/n$, yet it is possible that the learning curve for \emph{every} specific distribution decays exponentially. In this case, for each $n$ there exists a different ‘hard’ distribution requiring $d/n$ samples. Antos and Lugosi asked which concept classes admit a ‘strong minimax lower bound’ – a lower bound of $d'/n$ that holds for a fixed distribution for infinitely many $n$.

We solve this problem in a principled manner, by introducing a combinatorial dimension called VCL that characterizes the best $d'$ for which $d'/n$ is a strong minimax lower bound. Conceptually, the VCL dimension determines the asymptotic rate of decay of the minimax learning curve, which we call the ‘distribution-free trail’ of the class. Our characterization strengthens the lower bounds of Bousquet, Hanneke, Moran, van Handel, and Yehudayoff (2021), and it refines their analysis of learning curves, by showing that for classes with finite VCL the learning rate can be decomposed into a linear component that depends only on the hypothesis class and a faster (e.g., exponential) component that depends also on the target distribution. As a corollary, we recover the lower bound of Antos and Lugosi (1996, 1998) for half-spaces in $\mathbb{R}^d$.

Finally, to provide another viewpoint on our work and how it compares to traditional PAC learning bounds, we also present an alternative formulation of our results in a language that is closer to the PAC setting.
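As a schematic reading of the abstract (the precise quantifiers and constants are given in the paper, and the notation below is illustrative rather than the paper’s), the two notions at play can be written as follows. A strong minimax lower bound with parameter $d'$, in the sense of Antos and Lugosi, asks roughly that for every learning algorithm $A$ there is a single fixed realizable distribution $P$ with
$$\mathbb{E}_{S \sim P^n}\big[\mathrm{er}_P(A(S))\big] \;\geq\; c\,\frac{d'}{n} \qquad \text{for infinitely many } n,$$
where $\mathrm{er}_P$ denotes the error under $P$ and $c>0$ is a constant; this contrasts with the classic ‘No Free Lunch’ bound, in which the hard distribution may change with $n$. The decomposition result for classes with finite VCL then says, roughly, that an optimal learner’s curve on every realizable $P$ behaves like
$$\mathbb{E}\big[\mathrm{er}_P(\hat{h}_n)\big] \;\lesssim\; \frac{d'}{n} \;+\; \varepsilon_P(n),$$
where the linear term depends only on the class (through its VCL dimension) and $\varepsilon_P(n)$ is a distribution-dependent term that may decay much faster, e.g. exponentially in $n$. A standard toy calculation (not taken from the paper) shows how an envelope can decay far more slowly than the curves it traces: each curve $n \mapsto p(1-p)^n$ decays exponentially for fixed $p \in (0,1)$, yet $\sup_{p \in (0,1)} p(1-p)^n \approx 1/(en)$, so the pointwise supremum decays only linearly in $1/n$.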

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-bousquet23a,
  title     = {Fine-Grained Distribution-Dependent Learning Curves},
  author    = {Bousquet, Olivier and Hanneke, Steve and Moran, Shay and Shafer, Jonathan and Tolstikhin, Ilya},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {5890--5924},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/bousquet23a/bousquet23a.pdf},
  url       = {https://proceedings.mlr.press/v195/bousquet23a.html}
}
Endnote
%0 Conference Paper
%T Fine-Grained Distribution-Dependent Learning Curves
%A Olivier Bousquet
%A Steve Hanneke
%A Shay Moran
%A Jonathan Shafer
%A Ilya Tolstikhin
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-bousquet23a
%I PMLR
%P 5890--5924
%U https://proceedings.mlr.press/v195/bousquet23a.html
%V 195
APA
Bousquet, O., Hanneke, S., Moran, S., Shafer, J. & Tolstikhin, I. (2023). Fine-Grained Distribution-Dependent Learning Curves. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:5890-5924. Available from https://proceedings.mlr.press/v195/bousquet23a.html.
