Consistency of structured output learning with missing labels

Kostiantyn Antoniuk, Vojtech Franc, Vaclav Hlavac
Asian Conference on Machine Learning, PMLR 45:81-95, 2016.

Abstract

In this paper we study the statistical consistency of partial losses suitable for learning structured output predictors from examples with missing labels. We provide sufficient conditions on the data-generating distribution under which the expected risk of the structured predictor learned by minimizing the partial loss converges to the optimal Bayes risk defined by an associated complete loss. We introduce the concept of surrogate classification-calibrated partial losses, which are easier to optimize yet whose minimization preserves statistical consistency. We give concrete examples of surrogate partial losses that are classification calibrated. In particular, we show that the ramp loss, which is at the core of many existing algorithms, is classification calibrated.
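As a rough illustration of the terms used in the abstract (a generic sketch, not the paper's own notation or results), classification calibration of a surrogate loss ψ is the property that driving the surrogate risk R_ψ to its infimum also drives the target risk R to its infimum; the binary ramp loss is a clipped variant of the hinge loss:

```latex
% Illustrative sketch only; f_n denotes a predictor minimizing the
% empirical surrogate (partial) risk over n training examples.
% Calibration of \psi guarantees the implication
\[
  R_\psi(f_n) \;\longrightarrow\; \inf_f R_\psi(f)
  \quad\Longrightarrow\quad
  R(f_n) \;\longrightarrow\; \inf_f R(f).
\]
% The (binary) ramp loss, a clipped version of the hinge loss:
\[
  \psi_{\mathrm{ramp}}(z) \;=\; \min\bigl(1,\, \max(0,\, 1 - z)\bigr).
\]
```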

Cite this Paper


BibTeX
@InProceedings{pmlr-v45-Antoniuk15,
  title     = {Consistency of structured output learning with missing labels},
  author    = {Antoniuk, Kostiantyn and Franc, Vojtech and Hlavac, Vaclav},
  booktitle = {Asian Conference on Machine Learning},
  pages     = {81--95},
  year      = {2016},
  editor    = {Holmes, Geoffrey and Liu, Tie-Yan},
  volume    = {45},
  series    = {Proceedings of Machine Learning Research},
  address   = {Hong Kong},
  month     = {20--22 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v45/Antoniuk15.pdf},
  url       = {https://proceedings.mlr.press/v45/Antoniuk15.html},
  abstract  = {In this paper we study the statistical consistency of partial losses suitable for learning structured output predictors from examples with missing labels. We provide sufficient conditions on the data-generating distribution under which the expected risk of the structured predictor learned by minimizing the partial loss converges to the optimal Bayes risk defined by an associated complete loss. We introduce the concept of surrogate classification-calibrated partial losses, which are easier to optimize yet whose minimization preserves statistical consistency. We give concrete examples of surrogate partial losses that are classification calibrated. In particular, we show that the ramp loss, which is at the core of many existing algorithms, is classification calibrated.}
}
Endnote
%0 Conference Paper
%T Consistency of structured output learning with missing labels
%A Kostiantyn Antoniuk
%A Vojtech Franc
%A Vaclav Hlavac
%B Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Geoffrey Holmes
%E Tie-Yan Liu
%F pmlr-v45-Antoniuk15
%I PMLR
%P 81--95
%U https://proceedings.mlr.press/v45/Antoniuk15.html
%V 45
%X In this paper we study the statistical consistency of partial losses suitable for learning structured output predictors from examples with missing labels. We provide sufficient conditions on the data-generating distribution under which the expected risk of the structured predictor learned by minimizing the partial loss converges to the optimal Bayes risk defined by an associated complete loss. We introduce the concept of surrogate classification-calibrated partial losses, which are easier to optimize yet whose minimization preserves statistical consistency. We give concrete examples of surrogate partial losses that are classification calibrated. In particular, we show that the ramp loss, which is at the core of many existing algorithms, is classification calibrated.
RIS
TY  - CPAPER
TI  - Consistency of structured output learning with missing labels
AU  - Kostiantyn Antoniuk
AU  - Vojtech Franc
AU  - Vaclav Hlavac
BT  - Asian Conference on Machine Learning
DA  - 2016/02/25
ED  - Geoffrey Holmes
ED  - Tie-Yan Liu
ID  - pmlr-v45-Antoniuk15
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 45
SP  - 81
EP  - 95
L1  - http://proceedings.mlr.press/v45/Antoniuk15.pdf
UR  - https://proceedings.mlr.press/v45/Antoniuk15.html
AB  - In this paper we study the statistical consistency of partial losses suitable for learning structured output predictors from examples with missing labels. We provide sufficient conditions on the data-generating distribution under which the expected risk of the structured predictor learned by minimizing the partial loss converges to the optimal Bayes risk defined by an associated complete loss. We introduce the concept of surrogate classification-calibrated partial losses, which are easier to optimize yet whose minimization preserves statistical consistency. We give concrete examples of surrogate partial losses that are classification calibrated. In particular, we show that the ramp loss, which is at the core of many existing algorithms, is classification calibrated.
ER  -
APA
Antoniuk, K., Franc, V. & Hlavac, V. (2016). Consistency of structured output learning with missing labels. Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 45:81-95. Available from https://proceedings.mlr.press/v45/Antoniuk15.html.