Embedding Dimension of Polyhedral Losses

Jessie Finocchiaro, Rafael Frongillo, Bo Waggoner
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:1558-1585, 2020.

Abstract

A common technique in supervised learning with discrete losses, such as 0-1 loss, is to optimize a convex surrogate loss over R^d, calibrated with respect to the original loss. In particular, recent work has investigated embedding the original predictions (e.g. labels) as points in R^d, showing an equivalence to using polyhedral surrogates. In this work, we study the notion of the embedding dimension of a given discrete loss: the minimum dimension d such that an embedding exists. We characterize d-embeddability for all d, with a particularly tight characterization for d=1 (embedding into the real line), and useful necessary conditions for d>1 in the form of a quadratic feasibility program. We illustrate our results with novel lower bounds for abstain loss.
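For concreteness, the central quantity from the abstract can be sketched as follows (the notation here is illustrative and ours, not necessarily the paper's):

```latex
% Embedding dimension of a discrete loss \ell, per the abstract's description:
% the minimum dimension d such that an embedding of the original predictions
% into R^d exists (equivalently, a calibrated polyhedral surrogate over R^d).
\[
  \operatorname{embdim}(\ell)
  \;=\;
  \min \bigl\{\, d \in \mathbb{N} \;:\;
    \text{an embedding of } \ell \text{ into } \mathbb{R}^d \text{ exists} \,\bigr\}
\]
```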

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-finocchiaro20a,
  title     = {Embedding Dimension of Polyhedral Losses},
  author    = {Finocchiaro, Jessie and Frongillo, Rafael and Waggoner, Bo},
  pages     = {1558--1585},
  year      = {2020},
  editor    = {Jacob Abernethy and Shivani Agarwal},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/finocchiaro20a/finocchiaro20a.pdf},
  url       = {http://proceedings.mlr.press/v125/finocchiaro20a.html},
  abstract  = {A common technique in supervised learning with discrete losses, such as 0-1 loss, is to optimize a convex surrogate loss over R^d, calibrated with respect to the original loss. In particular, recent work has investigated embedding the original predictions (e.g. labels) as points in R^d, showing an equivalence to using polyhedral surrogates. In this work, we study the notion of the embedding dimension of a given discrete loss: the minimum dimension d such that an embedding exists. We characterize d-embeddability for all d, with a particularly tight characterization for d=1 (embedding into the real line), and useful necessary conditions for d>1 in the form of a quadratic feasibility program. We illustrate our results with novel lower bounds for abstain loss.}
}
Endnote
%0 Conference Paper
%T Embedding Dimension of Polyhedral Losses
%A Jessie Finocchiaro
%A Rafael Frongillo
%A Bo Waggoner
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-finocchiaro20a
%I PMLR
%J Proceedings of Machine Learning Research
%P 1558--1585
%U http://proceedings.mlr.press
%V 125
%W PMLR
%X A common technique in supervised learning with discrete losses, such as 0-1 loss, is to optimize a convex surrogate loss over R^d, calibrated with respect to the original loss. In particular, recent work has investigated embedding the original predictions (e.g. labels) as points in R^d, showing an equivalence to using polyhedral surrogates. In this work, we study the notion of the embedding dimension of a given discrete loss: the minimum dimension d such that an embedding exists. We characterize d-embeddability for all d, with a particularly tight characterization for d=1 (embedding into the real line), and useful necessary conditions for d>1 in the form of a quadratic feasibility program. We illustrate our results with novel lower bounds for abstain loss.
APA
Finocchiaro, J., Frongillo, R., & Waggoner, B. (2020). Embedding Dimension of Polyhedral Losses. Proceedings of Thirty Third Conference on Learning Theory, in PMLR 125:1558-1585.