Enhanced H-Consistency Bounds

Anqi Mao, Mehryar Mohri, Yutao Zhong
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:772-813, 2025.

Abstract

Recent research has introduced a key notion of H-consistency bounds for surrogate losses. These bounds offer finite-sample guarantees, quantifying the relationship between the zero-one estimation error (or other target loss) and the surrogate loss estimation error for a specific hypothesis set. However, previous bounds were derived under the condition that a lower bound of the surrogate loss conditional regret is given as a convex function of the target conditional regret, without non-constant factors depending on the predictor or input instance. Can we derive finer and more favorable H-consistency bounds? In this work, we relax this condition and present a general framework for establishing enhanced H-consistency bounds based on more general inequalities relating conditional regrets. Our theorems not only subsume existing results as special cases but also enable the derivation of more favorable bounds in various scenarios. These include standard multi-class classification, binary and multi-class classification under Tsybakov noise conditions, and bipartite ranking.
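For intuition, an H-consistency bound typically takes the following form (a sketch following the authors' earlier work on H-consistency bounds; the notation and statement here are illustrative and not quoted from this paper). If, for some convex, non-decreasing function $\Psi$ with $\Psi(0) = 0$, the conditional regrets satisfy, for all $h \in \mathcal{H}$ and all inputs $x$,
\[
\Psi\bigl(\Delta \mathcal{C}_{\ell,\mathcal{H}}(h, x)\bigr) \;\le\; \Delta \mathcal{C}_{\Phi,\mathcal{H}}(h, x),
\]
then, roughly by Jensen's inequality, for all $h \in \mathcal{H}$,
\[
\mathcal{R}_{\ell}(h) - \mathcal{R}^{*}_{\ell,\mathcal{H}} + \mathcal{M}_{\ell,\mathcal{H}}
\;\le\;
\Psi^{-1}\bigl(\mathcal{R}_{\Phi}(h) - \mathcal{R}^{*}_{\Phi,\mathcal{H}} + \mathcal{M}_{\Phi,\mathcal{H}}\bigr),
\]
where $\ell$ is the target loss (e.g., zero-one), $\Phi$ the surrogate loss, and $\mathcal{M}$ the minimizability gaps. The relaxation studied in this paper allows the lower bound on the surrogate conditional regret to involve non-constant factors depending on the predictor or the input instance, rather than a fixed convex function of the target conditional regret alone.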

Cite this Paper


BibTeX
@InProceedings{pmlr-v272-mao25a,
  title     = {Enhanced $H$-Consistency Bounds},
  author    = {Mao, Anqi and Mohri, Mehryar and Zhong, Yutao},
  booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
  pages     = {772--813},
  year      = {2025},
  editor    = {Kamath, Gautam and Loh, Po-Ling},
  volume    = {272},
  series    = {Proceedings of Machine Learning Research},
  month     = {24--27 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v272/main/assets/mao25a/mao25a.pdf},
  url       = {https://proceedings.mlr.press/v272/mao25a.html},
  abstract  = {Recent research has introduced a key notion of $H$-consistency bounds for surrogate losses. These bounds offer finite-sample guarantees, quantifying the relationship between the zero-one estimation error (or other target loss) and the surrogate loss estimation error for a specific hypothesis set. However, previous bounds were derived under the condition that a lower bound of the surrogate loss conditional regret is given as a convex function of the target conditional regret, without non-constant factors depending on the predictor or input instance. Can we derive finer and more favorable $H$-consistency bounds? In this work, we relax this condition and present a general framework for establishing enhanced $H$-consistency bounds based on more general inequalities relating conditional regrets. Our theorems not only subsume existing results as special cases but also enable the derivation of more favorable bounds in various scenarios. These include standard multi-class classification, binary and multi-class classification under Tsybakov noise conditions, and bipartite ranking.}
}
Endnote
%0 Conference Paper
%T Enhanced $H$-Consistency Bounds
%A Anqi Mao
%A Mehryar Mohri
%A Yutao Zhong
%B Proceedings of The 36th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Gautam Kamath
%E Po-Ling Loh
%F pmlr-v272-mao25a
%I PMLR
%P 772--813
%U https://proceedings.mlr.press/v272/mao25a.html
%V 272
%X Recent research has introduced a key notion of $H$-consistency bounds for surrogate losses. These bounds offer finite-sample guarantees, quantifying the relationship between the zero-one estimation error (or other target loss) and the surrogate loss estimation error for a specific hypothesis set. However, previous bounds were derived under the condition that a lower bound of the surrogate loss conditional regret is given as a convex function of the target conditional regret, without non-constant factors depending on the predictor or input instance. Can we derive finer and more favorable $H$-consistency bounds? In this work, we relax this condition and present a general framework for establishing enhanced $H$-consistency bounds based on more general inequalities relating conditional regrets. Our theorems not only subsume existing results as special cases but also enable the derivation of more favorable bounds in various scenarios. These include standard multi-class classification, binary and multi-class classification under Tsybakov noise conditions, and bipartite ranking.
APA
Mao, A., Mohri, M. & Zhong, Y. (2025). Enhanced $H$-Consistency Bounds. Proceedings of The 36th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 272:772-813. Available from https://proceedings.mlr.press/v272/mao25a.html.