Consistency of structured output learning with missing labels


Kostiantyn Antoniuk, Vojtech Franc, Vaclav Hlavac;
Asian Conference on Machine Learning, PMLR 45:81-95, 2016.

Abstract

In this paper we study the statistical consistency of partial losses suitable for learning structured output predictors from examples containing missing labels. We provide sufficient conditions on the data-generating distribution under which the expected risk of the structured predictor learned by minimizing the partial loss converges to the optimal Bayes risk defined by an associated complete loss. We introduce the concept of surrogate classification-calibrated partial losses, which are easier to optimize yet whose minimization preserves statistical consistency. We give concrete examples of surrogate partial losses that are classification calibrated. In particular, we show that the ramp loss, which lies at the core of many existing algorithms, is classification calibrated.
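To make the ramp-loss construction concrete, the following is a minimal sketch, not the paper's exact formulation: it assumes a simple multiclass setting where the missing label is represented by a set of admissible labels consistent with the partial observation, and uses a partial 0/1 task loss. The function name and setup are illustrative assumptions.

```python
def partial_ramp_loss(scores, admissible):
    """Hypothetical sketch of a partial ramp loss.

    scores: list of per-label scores from the predictor.
    admissible: set of labels consistent with the partially observed truth.
    """
    # Partial 0/1 task loss: 0 if a label is admissible, 1 otherwise.
    delta = [0.0 if y in admissible else 1.0 for y in range(len(scores))]
    # Ramp construction: loss-augmented maximum minus plain maximum.
    # Unlike the hinge loss, this surrogate is bounded in [0, 1].
    augmented = max(d + s for d, s in zip(delta, scores))
    plain = max(scores)
    return augmented - plain
```

When the top-scoring label is admissible with a margin of at least 1 over all inadmissible labels, the loss is 0; when an inadmissible label dominates by a wide margin, the loss saturates at 1, which is the boundedness property that makes ramp-type surrogates attractive in practice.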
