Learning with metric losses

Dan Tsir Cohen, Aryeh Kontorovich
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:662-700, 2022.

Abstract

We propose a practical algorithm for learning mappings between two metric spaces, $\X$ and $\Y$. Our procedure is strongly Bayes-consistent whenever $\X$ and $\Y$ are topologically separable and $\Y$ is “bounded in expectation” (our term; the separability assumption can be somewhat weakened). At this level of generality, ours is the first such learnability result for unbounded loss in the agnostic setting. Our technique is based on metric medoids (a variant of Fréchet means) and presents a significant departure from existing methods, which, as we demonstrate, fail to achieve Bayes-consistency on general instance- and label-space metrics. Our proofs introduce the technique of {\em semi-stable compression}, which may be of independent interest.
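For orientation, here is a minimal formal sketch of the setting, in our own notation rather than the paper's (the symbols $R$, $\rho_\Y$, and $\mathrm{med}$ below are our labels): given i.i.d. pairs $(X_1,Y_1),\dots,(X_n,Y_n)$ from an unknown distribution on $\X \times \Y$, a learner outputs a hypothesis $\hat{f}_n : \X \to \Y$, and the metric loss of predicting $\hat{y}$ against the true label $y$ is the label-space distance $\rho_\Y(\hat{y}, y)$, giving risk
\[
R(f) = \mathbb{E}\bigl[\rho_\Y(f(X), Y)\bigr],
\]
with strong Bayes-consistency meaning $R(\hat{f}_n) \to \inf_f R(f)$ almost surely as $n \to \infty$. The medoid invoked above is, in its standard form (the paper's "metric medoid" may refine this), the sample point minimizing the summed distance to the rest of the sample:
\[
\mathrm{med}(y_1,\dots,y_m) \in \arg\min_{y \in \{y_1,\dots,y_m\}} \; \sum_{j=1}^{m} \rho_\Y(y, y_j).
\]
Unlike the Fréchet mean, which minimizes over all of $\Y$ (and classically squares the distances), the medoid is always well defined and computable from pairwise distances alone, which is what makes it a natural tool in an arbitrary metric label space.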

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-cohen22a,
  title     = {Learning with metric losses},
  author    = {Cohen, Dan Tsir and Kontorovich, Aryeh},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {662--700},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/cohen22a/cohen22a.pdf},
  url       = {https://proceedings.mlr.press/v178/cohen22a.html}
}
