Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds

Han Bao
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:525-547, 2023.

Abstract

Proper losses (or proper scoring rules) have been used for over half a century to elicit users' subjective probabilities from observations. In the machine learning community, the elicited probabilities are often used to tackle downstream tasks such as classification and bipartite ranking. Here, we assess the quality of probabilities elicited with different proper losses, which can be characterized by surrogate regret bounds describing how fast an estimated probability converges to the optimal one when a proper loss is optimized. This work contributes a sharp analysis of surrogate regret bounds in two ways. First, we provide general surrogate regret bounds for proper losses measured by the $L^1$ distance. This abstraction eschews a tailor-made analysis of each downstream task and delineates how universally a loss function operates. Our analysis relies on a classical mathematical tool known as the modulus of convexity, which is of independent interest. Second, we evaluate the surrogate regret bounds with polynomials to identify the quantitative convergence rate. These devices enable us to compare different losses, with which we can confirm that the lower bound of the surrogate regret bounds is $\Omega(\epsilon^{1/2})$ for popular loss functions.
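For readers unfamiliar with the central tool, the modulus of convexity of a convex function $\varphi$ admits the following standard textbook definition (the paper's exact normalization and domain conventions may differ):

```latex
% Modulus of convexity of a convex function \varphi:
% the worst-case gap between the chord midpoint and the function,
% over pairs of points at distance at least \epsilon.
\delta_\varphi(\epsilon)
  = \inf\left\{\,
      \frac{\varphi(u) + \varphi(v)}{2}
      - \varphi\!\left(\frac{u+v}{2}\right)
      \;:\; u, v \in \operatorname{dom}\varphi,\ |u - v| \ge \epsilon
    \right\}.
```

Roughly speaking, a polynomial lower bound $\delta_\varphi(\epsilon) \ge c\,\epsilon^p$ on the modulus translates into a surrogate regret bound of order $(\text{surrogate regret})^{1/p}$; the quadratic case $p = 2$ corresponds to the $\epsilon^{1/2}$ rate mentioned in the abstract.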

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-bao23a,
  title     = {Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds},
  author    = {Bao, Han},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {525--547},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/bao23a/bao23a.pdf},
  url       = {https://proceedings.mlr.press/v195/bao23a.html}
}
Endnote
%0 Conference Paper
%T Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds
%A Han Bao
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-bao23a
%I PMLR
%P 525--547
%U https://proceedings.mlr.press/v195/bao23a.html
%V 195
APA
Bao, H. (2023). Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:525-547. Available from https://proceedings.mlr.press/v195/bao23a.html.