On Consistent Surrogate Risk Minimization and Property Elicitation

Arpit Agarwal, Shivani Agarwal
Proceedings of The 28th Conference on Learning Theory, PMLR 40:4-22, 2015.

Abstract

Surrogate risk minimization is a popular framework for supervised learning; property elicitation is a widely studied area in probability forecasting, machine learning, statistics and economics. In this paper, we connect these two themes by showing that calibrated surrogate losses in supervised learning can essentially be viewed as eliciting or estimating certain properties of the underlying conditional label distribution that are sufficient to construct an optimal classifier under the target loss of interest. Our study helps to shed light on the design of convex calibrated surrogates. We also give a new framework for designing convex calibrated surrogates under low-noise conditions by eliciting properties that allow one to construct ‘coarse’ estimates of the underlying distribution.
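The abstract's central idea can be illustrated with the classic binary case (a sketch, not taken from the paper): logistic loss is a convex calibrated surrogate for 0-1 loss, and minimizing its expected value at a point x elicits the property η(x) = P(Y = +1 | x) itself, from which the Bayes-optimal classifier sign(f) is constructed. The expected logistic loss L(f) = η log(1+e^{-f}) + (1-η) log(1+e^{f}) has gradient σ(f) - η, so its minimizer satisfies σ(f*) = η.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def minimize_expected_logistic_loss(eta, lr=0.5, steps=2000):
    """Gradient descent on L(f) = eta*log(1+e^-f) + (1-eta)*log(1+e^f).

    The gradient simplifies to sigmoid(f) - eta, so the minimizer f*
    satisfies sigmoid(f*) = eta: the surrogate 'elicits' eta.
    """
    f = 0.0
    for _ in range(steps):
        f -= lr * (sigmoid(f) - eta)
    return f

eta = 0.7                                  # true conditional probability P(Y=+1|x)
f_star = minimize_expected_logistic_loss(eta)
prob_estimate = sigmoid(f_star)            # recovers eta (the elicited property)
classifier = +1 if f_star > 0 else -1      # Bayes-optimal label under 0-1 loss
```

Here the elicited property (the full conditional probability) is more than is strictly needed for 0-1 classification; the paper's 'coarse' estimates correspond to eliciting weaker properties that still suffice to build an optimal classifier.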

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Agarwal15,
  title     = {On Consistent Surrogate Risk Minimization and Property Elicitation},
  author    = {Arpit Agarwal and Shivani Agarwal},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {4--22},
  year      = {2015},
  editor    = {Peter Grünwald and Elad Hazan and Satyen Kale},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Agarwal15.pdf},
  url       = {http://proceedings.mlr.press/v40/Agarwal15.html},
  abstract  = {Surrogate risk minimization is a popular framework for supervised learning; property elicitation is a widely studied area in probability forecasting, machine learning, statistics and economics. In this paper, we connect these two themes by showing that calibrated surrogate losses in supervised learning can essentially be viewed as eliciting or estimating certain properties of the underlying conditional label distribution that are sufficient to construct an optimal classifier under the target loss of interest. Our study helps to shed light on the design of convex calibrated surrogates. We also give a new framework for designing convex calibrated surrogates under low-noise conditions by eliciting properties that allow one to construct ‘coarse’ estimates of the underlying distribution.}
}
Endnote
%0 Conference Paper
%T On Consistent Surrogate Risk Minimization and Property Elicitation
%A Arpit Agarwal
%A Shivani Agarwal
%B Proceedings of The 28th Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2015
%E Peter Grünwald
%E Elad Hazan
%E Satyen Kale
%F pmlr-v40-Agarwal15
%I PMLR
%J Proceedings of Machine Learning Research
%P 4--22
%U http://proceedings.mlr.press
%V 40
%W PMLR
%X Surrogate risk minimization is a popular framework for supervised learning; property elicitation is a widely studied area in probability forecasting, machine learning, statistics and economics. In this paper, we connect these two themes by showing that calibrated surrogate losses in supervised learning can essentially be viewed as eliciting or estimating certain properties of the underlying conditional label distribution that are sufficient to construct an optimal classifier under the target loss of interest. Our study helps to shed light on the design of convex calibrated surrogates. We also give a new framework for designing convex calibrated surrogates under low-noise conditions by eliciting properties that allow one to construct ‘coarse’ estimates of the underlying distribution.
RIS
TY  - CPAPER
TI  - On Consistent Surrogate Risk Minimization and Property Elicitation
AU  - Arpit Agarwal
AU  - Shivani Agarwal
BT  - Proceedings of The 28th Conference on Learning Theory
PY  - 2015/06/26
DA  - 2015/06/26
ED  - Peter Grünwald
ED  - Elad Hazan
ED  - Satyen Kale
ID  - pmlr-v40-Agarwal15
PB  - PMLR
SP  - 4
EP  - 22
DP  - PMLR
L1  - http://proceedings.mlr.press/v40/Agarwal15.pdf
UR  - http://proceedings.mlr.press/v40/Agarwal15.html
AB  - Surrogate risk minimization is a popular framework for supervised learning; property elicitation is a widely studied area in probability forecasting, machine learning, statistics and economics. In this paper, we connect these two themes by showing that calibrated surrogate losses in supervised learning can essentially be viewed as eliciting or estimating certain properties of the underlying conditional label distribution that are sufficient to construct an optimal classifier under the target loss of interest. Our study helps to shed light on the design of convex calibrated surrogates. We also give a new framework for designing convex calibrated surrogates under low-noise conditions by eliciting properties that allow one to construct ‘coarse’ estimates of the underlying distribution.
ER  -
APA
Agarwal, A. & Agarwal, S. (2015). On Consistent Surrogate Risk Minimization and Property Elicitation. Proceedings of The 28th Conference on Learning Theory, in PMLR 40:4-22.