Structured Prediction with Partial Labelling through the Infimum Loss

Vivien Cabannnes, Alessandro Rudi, Francis Bach
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1230-1239, 2020.

Abstract

Annotating datasets is one of the main costs of modern supervised learning. The goal of weak supervision is to enable models to learn from forms of labelling that are cheaper to collect, such as partial labelling. This is a type of incomplete annotation where, for each datapoint, supervision is given as a set of labels containing the true one. The problem of supervised learning with partial labelling has been studied for specific instances such as classification, multi-label, ranking or segmentation, but a general framework is still missing. This paper provides a unified framework, based on structured prediction and on the concept of \emph{infimum loss}, to deal with partial labelling over a wide family of learning problems and loss functions. The framework leads naturally to explicit algorithms that can be easily implemented and for which we prove statistical consistency and learning rates. Experiments confirm the superiority of the proposed approach over commonly used baselines.
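The core object of the abstract can be illustrated in a few lines. The sketch below is not the paper's implementation; it only shows the definition of the infimum loss for classification with the 0-1 loss: given a candidate set S of labels (known to contain the true label), a prediction z is scored by the best-case loss over the candidates, L(S, z) = inf_{y ∈ S} ℓ(y, z). The function names are illustrative, not taken from any released code.

```python
def zero_one_loss(y, z):
    """Plain 0-1 loss for classification: 0 if correct, 1 otherwise."""
    return 0.0 if y == z else 1.0


def infimum_loss(candidates, z, loss=zero_one_loss):
    """Infimum loss for partial labelling: L(S, z) = min over y in S of loss(y, z).

    `candidates` is the set of labels the annotator could not rule out;
    a prediction is only penalized if it is incompatible with every candidate.
    """
    return min(loss(y, z) for y in candidates)


# A partially labelled datapoint: the annotator only narrowed the label
# down to {"cat", "dog"}.
S = {"cat", "dog"}
print(infimum_loss(S, "cat"))   # 0.0: prediction matches some candidate
print(infimum_loss(S, "bird"))  # 1.0: prediction rules out every candidate
```

With richer losses (e.g. ranking or segmentation losses), the same definition applies unchanged, which is what makes the infimum loss a unifying device across problem families.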

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-cabannnes20a,
  title     = {Structured Prediction with Partial Labelling through the Infimum Loss},
  author    = {Cabannnes, Vivien and Rudi, Alessandro and Bach, Francis},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {1230--1239},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/cabannnes20a/cabannnes20a.pdf},
  url       = {http://proceedings.mlr.press/v119/cabannnes20a.html},
  abstract  = {Annotating datasets is one of the main costs of modern supervised learning. The goal of weak supervision is to enable models to learn from forms of labelling that are cheaper to collect, such as partial labelling. This is a type of incomplete annotation where, for each datapoint, supervision is given as a set of labels containing the true one. The problem of supervised learning with partial labelling has been studied for specific instances such as classification, multi-label, ranking or segmentation, but a general framework is still missing. This paper provides a unified framework, based on structured prediction and on the concept of \emph{infimum loss}, to deal with partial labelling over a wide family of learning problems and loss functions. The framework leads naturally to explicit algorithms that can be easily implemented and for which we prove statistical consistency and learning rates. Experiments confirm the superiority of the proposed approach over commonly used baselines.}
}
EndNote
%0 Conference Paper
%T Structured Prediction with Partial Labelling through the Infimum Loss
%A Vivien Cabannnes
%A Alessandro Rudi
%A Francis Bach
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-cabannnes20a
%I PMLR
%P 1230--1239
%U http://proceedings.mlr.press/v119/cabannnes20a.html
%V 119
%X Annotating datasets is one of the main costs of modern supervised learning. The goal of weak supervision is to enable models to learn from forms of labelling that are cheaper to collect, such as partial labelling. This is a type of incomplete annotation where, for each datapoint, supervision is given as a set of labels containing the true one. The problem of supervised learning with partial labelling has been studied for specific instances such as classification, multi-label, ranking or segmentation, but a general framework is still missing. This paper provides a unified framework, based on structured prediction and on the concept of \emph{infimum loss}, to deal with partial labelling over a wide family of learning problems and loss functions. The framework leads naturally to explicit algorithms that can be easily implemented and for which we prove statistical consistency and learning rates. Experiments confirm the superiority of the proposed approach over commonly used baselines.
APA
Cabannnes, V., Rudi, A. & Bach, F. (2020). Structured Prediction with Partial Labelling through the Infimum Loss. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1230-1239. Available from http://proceedings.mlr.press/v119/cabannnes20a.html.

Related Material