Learning Submodular Losses with the Lovász Hinge

Jiaqian Yu, Matthew Blaschko
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1623-1631, 2015.

Abstract

Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates if and only if the underlying loss function is increasing in the number of incorrect predictions. However, gradient or cutting-plane computation for these functions is NP-hard for non-supermodular loss functions. We propose instead a novel convex surrogate loss function for submodular losses, the Lovász hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a gradient or cutting-plane. As a result, we have developed the first tractable convex surrogates in the literature for submodular losses. We demonstrate the utility of this novel convex surrogate through a real-world image labeling task.

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-yub15,
  title     = {Learning Submodular Losses with the Lovasz Hinge},
  author    = {Yu, Jiaqian and Blaschko, Matthew},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1623--1631},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/yub15.pdf},
  url       = {https://proceedings.mlr.press/v37/yub15.html}
}
APA
Yu, J. & Blaschko, M. (2015). Learning Submodular Losses with the Lovasz Hinge. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1623-1631. Available from https://proceedings.mlr.press/v37/yub15.html.
