Hierarchical Integral Probability Metrics: A distance on random probability measures with low sample complexity

Marta Catalano, Hugo Lavenant
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:5841-5861, 2024.

Abstract

Random probabilities are a key component of many nonparametric methods in Statistics and Machine Learning. To quantify comparisons between different laws of random probabilities, several works have started to use the elegant Wasserstein over Wasserstein distance. In this paper we prove that the infinite dimensionality of the space of probabilities drastically deteriorates its sample complexity, which is slower than any polynomial rate in the sample size. We propose a new distance that preserves many desirable properties of the former while achieving a parametric rate of convergence. In particular, our distance 1) metrizes weak convergence; 2) can be estimated numerically through samples with low complexity; 3) can be bounded analytically from above and below. The main ingredients are integral probability metrics, which lead to the name hierarchical IPM.
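To make the baseline notion in the abstract concrete: a minimal sketch (not the authors' code) of estimating the Wasserstein over Wasserstein distance between two samples of random probability measures on the real line. Each random measure is represented by i.i.d. draws from it; the inner 1D Wasserstein-1 distances form a cost matrix, and the outer Wasserstein-1 between two uniform empirical laws with equally many atoms reduces to an optimal assignment. All function and variable names below are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.optimize import linear_sum_assignment


def wasserstein_over_wasserstein(measures_a, measures_b):
    """Estimate the Wasserstein-over-Wasserstein distance in 1D.

    measures_a, measures_b: lists of 1D arrays; each array holds i.i.d.
    samples from one draw of a random probability measure.
    """
    if len(measures_a) != len(measures_b):
        raise ValueError("equal counts keep the outer W1 an assignment problem")
    # Inner distance: 1D Wasserstein-1 between each pair of empirical measures.
    cost = np.array(
        [[wasserstein_distance(p, q) for q in measures_b] for p in measures_a]
    )
    # Outer W1 between uniform empirical laws with equally many atoms is
    # attained at a permutation (an extreme point of the Birkhoff polytope),
    # so an optimal assignment solves it exactly.
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()
```

The quadratic number of inner distances in this sketch hints at the computational side of the comparison; the statistical side, which the paper analyzes, is that the outer empirical estimate converges slower than any polynomial rate, which motivates the hierarchical IPM alternative.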

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-catalano24a,
  title =     {Hierarchical Integral Probability Metrics: A distance on random probability measures with low sample complexity},
  author =    {Catalano, Marta and Lavenant, Hugo},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages =     {5841--5861},
  year =      {2024},
  editor =    {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume =    {235},
  series =    {Proceedings of Machine Learning Research},
  month =     {21--27 Jul},
  publisher = {PMLR},
  pdf =       {https://raw.githubusercontent.com/mlresearch/v235/main/assets/catalano24a/catalano24a.pdf},
  url =       {https://proceedings.mlr.press/v235/catalano24a.html},
  abstract =  {Random probabilities are a key component to many nonparametric methods in Statistics and Machine Learning. To quantify comparisons between different laws of random probabilities several works are starting to use the elegant Wasserstein over Wasserstein distance. In this paper we prove that the infinite dimensionality of the space of probabilities drastically deteriorates its sample complexity, which is slower than any polynomial rate in the sample size. We propose a new distance that preserves many desirable properties of the former while achieving a parametric rate of convergence. In particular, our distance 1) metrizes weak convergence; 2) can be estimated numerically through samples with low complexity; 3) can be bounded analytically from above and below. The main ingredient are integral probability metrics, which lead to the name hierarchical IPM.}
}
Endnote
%0 Conference Paper
%T Hierarchical Integral Probability Metrics: A distance on random probability measures with low sample complexity
%A Marta Catalano
%A Hugo Lavenant
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-catalano24a
%I PMLR
%P 5841--5861
%U https://proceedings.mlr.press/v235/catalano24a.html
%V 235
%X Random probabilities are a key component to many nonparametric methods in Statistics and Machine Learning. To quantify comparisons between different laws of random probabilities several works are starting to use the elegant Wasserstein over Wasserstein distance. In this paper we prove that the infinite dimensionality of the space of probabilities drastically deteriorates its sample complexity, which is slower than any polynomial rate in the sample size. We propose a new distance that preserves many desirable properties of the former while achieving a parametric rate of convergence. In particular, our distance 1) metrizes weak convergence; 2) can be estimated numerically through samples with low complexity; 3) can be bounded analytically from above and below. The main ingredient are integral probability metrics, which lead to the name hierarchical IPM.
APA
Catalano, M. & Lavenant, H. (2024). Hierarchical Integral Probability Metrics: A distance on random probability measures with low sample complexity. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:5841-5861. Available from https://proceedings.mlr.press/v235/catalano24a.html.