$H$-Consistency Bounds for Pairwise Misranking Loss Surrogates

Anqi Mao, Mehryar Mohri, Yutao Zhong
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:23743-23802, 2023.

Abstract

We present a detailed study of $H$-consistency bounds for score-based ranking. These are upper bounds on the target loss estimation error of a predictor in a hypothesis set $H$, expressed in terms of the surrogate loss estimation error of that predictor. We show that, both in the general pairwise ranking scenario and in the bipartite ranking scenario, there are no meaningful $H$-consistency bounds for most hypothesis sets used in practice, including the family of linear models and that of neural networks, which are equicontinuous with respect to the input. To obtain ranking surrogate losses with theoretical guarantees, we show that a natural solution is to resort to a pairwise abstention loss in the general pairwise ranking scenario and, similarly, a bipartite abstention loss in the bipartite ranking scenario, which allow abstaining from making predictions at some limited cost $c$. For surrogates of these abstention losses, we give a series of $H$-consistency bounds for both the family of linear functions and that of neural networks with one hidden layer. Our experimental results illustrate the effectiveness of ranking with abstention.
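To make the notion concrete, $H$-consistency bounds of the kind studied here typically take the following shape, where $\mathcal{E}_{\ell}(h)$ denotes the expected $\ell$-loss of a predictor $h$, $\mathcal{E}^{*}_{\ell}(H)$ its infimum over the hypothesis set $H$, and $\Gamma$ a non-decreasing (often concave) function. This display is a sketch for orientation; the notation ($\Gamma$, $\ell_{\mathrm{target}}$, $\ell_{\mathrm{sur}}$) is assumed for this summary rather than taken from the paper:

$$\mathcal{E}_{\ell_{\mathrm{target}}}(h) - \mathcal{E}^{*}_{\ell_{\mathrm{target}}}(H) \;\le\; \Gamma\!\left(\mathcal{E}_{\ell_{\mathrm{sur}}}(h) - \mathcal{E}^{*}_{\ell_{\mathrm{sur}}}(H)\right) \qquad \text{for all } h \in H.$$

Similarly, one natural way to let a scoring function $h$ abstain at cost $c$ on a pair $(x, x')$ whose scores are too close is, for some margin threshold $\gamma > 0$ (again our illustrative formalization, not necessarily the paper's exact definition, with $\ell_{\mathrm{misrank}}$ the pairwise misranking loss on the labeled pair):

$$\ell^{\mathrm{abst}}_{c}(h, x, x') \;=\; \ell_{\mathrm{misrank}}(h, x, x')\, \mathbb{1}_{|h(x) - h(x')| > \gamma} \;+\; c\, \mathbb{1}_{|h(x) - h(x')| \le \gamma}.$$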

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-mao23a,
  title     = {$H$-Consistency Bounds for Pairwise Misranking Loss Surrogates},
  author    = {Mao, Anqi and Mohri, Mehryar and Zhong, Yutao},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {23743--23802},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/mao23a/mao23a.pdf},
  url       = {https://proceedings.mlr.press/v202/mao23a.html}
}
Endnote
%0 Conference Paper
%T $H$-Consistency Bounds for Pairwise Misranking Loss Surrogates
%A Anqi Mao
%A Mehryar Mohri
%A Yutao Zhong
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-mao23a
%I PMLR
%P 23743--23802
%U https://proceedings.mlr.press/v202/mao23a.html
%V 202
APA
Mao, A., Mohri, M. & Zhong, Y. (2023). $H$-Consistency Bounds for Pairwise Misranking Loss Surrogates. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:23743-23802. Available from https://proceedings.mlr.press/v202/mao23a.html.
