No Scale Sensitive Dimension for Distribution Learning

Tosca Lechner, Shai Ben-David
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-27, 2026.

Abstract

Learning probability distributions is one of the most basic statistical learning tasks. While for many learning tasks learnability of a class can be characterized by a combinatorial dimension (like the VC-dimension for binary classification prediction), no such characterization is known for classes of probability distributions. A leap toward resolving this long-standing problem was made recently by Lechner and Ben-David who showed that there can be no \emph{scale invariant} characterization of PAC style learnability of such classes. The question of \emph{scale sensitive} characterization remained open. In this paper we fully resolve the question by showing that there can be no \emph{scale sensitive} combinatorial characterization of PAC style learnability of classes of probability distributions.
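For context on the kind of characterization the abstract refers to: in binary classification, PAC learnability is fully captured by the VC-dimension, whose standard definition is sketched below (this is textbook background, not material from the paper itself).

```latex
% Background sketch: the VC-dimension referenced in the abstract.
% A class H of binary-valued functions on a domain X shatters a finite
% set S = {x_1, ..., x_m} if every labeling of S is realized by some h in H:
\[
  H \text{ shatters } S \iff
  \bigl\{\,(h(x_1),\dots,h(x_m)) : h \in H \,\bigr\} = \{0,1\}^{m}.
\]
% The VC-dimension is the size of the largest shattered set:
\[
  \mathrm{VCdim}(H) \;=\; \sup\bigl\{\, m : \exists\, S \subseteq X,\ |S| = m,\ H \text{ shatters } S \,\bigr\}.
\]
```

By the fundamental theorem of statistical learning, a binary class is PAC learnable if and only if its VC-dimension is finite; the paper's result is that no analogous scale-sensitive dimension can exist for classes of probability distributions.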

Cite this Paper


BibTeX
@InProceedings{pmlr-v313-lechner26a,
  title     = {No Scale Sensitive Dimension for Distribution Learning},
  author    = {Lechner, Tosca and Ben-David, Shai},
  booktitle = {Proceedings of The 37th International Conference on Algorithmic Learning Theory},
  pages     = {1--27},
  year      = {2026},
  editor    = {Telgarsky, Matus and Ullman, Jonathan},
  volume    = {313},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v313/main/assets/lechner26a/lechner26a.pdf},
  url       = {https://proceedings.mlr.press/v313/lechner26a.html},
  abstract  = {Learning probability distributions is one of the most basic statistical learning tasks. While for many learning tasks learnability of a class can be characterized by a combinatorial dimension (like the VC-dimension for binary classification prediction), no such characterization is known for classes of probability distributions. A leap toward resolving this long-standing problem was made recently by Lechner and Ben-David who showed that there can be no \emph{scale invariant} characterization of PAC style learnability of such classes. The question of \emph{scale sensitive} characterization remained open. In this paper we fully resolve the question by showing that there can be no \emph{scale sensitive} combinatorial characterization of PAC style learnability of classes of probability distributions.}
}
Endnote
%0 Conference Paper
%T No Scale Sensitive Dimension for Distribution Learning
%A Tosca Lechner
%A Shai Ben-David
%B Proceedings of The 37th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2026
%E Matus Telgarsky
%E Jonathan Ullman
%F pmlr-v313-lechner26a
%I PMLR
%P 1--27
%U https://proceedings.mlr.press/v313/lechner26a.html
%V 313
%X Learning probability distributions is one of the most basic statistical learning tasks. While for many learning tasks learnability of a class can be characterized by a combinatorial dimension (like the VC-dimension for binary classification prediction), no such characterization is known for classes of probability distributions. A leap toward resolving this long-standing problem was made recently by Lechner and Ben-David who showed that there can be no \emph{scale invariant} characterization of PAC style learnability of such classes. The question of \emph{scale sensitive} characterization remained open. In this paper we fully resolve the question by showing that there can be no \emph{scale sensitive} combinatorial characterization of PAC style learnability of classes of probability distributions.
APA
Lechner, T. & Ben-David, S. (2026). No Scale Sensitive Dimension for Distribution Learning. Proceedings of The 37th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 313:1-27. Available from https://proceedings.mlr.press/v313/lechner26a.html.