Risk Bounds and Learning Algorithms for the Regression Approach to Structured Output Prediction

Sébastien Giguère, François Laviolette, Mario Marchand, Khadidja Sylla
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):107-114, 2013.

Abstract

We provide rigorous guarantees for the regression approach to structured output prediction. We show that the quadratic regression loss is a convex surrogate of the prediction loss when the output kernel satisfies a certain condition with respect to the prediction loss. We provide two upper bounds on the prediction risk that depend on the empirical quadratic risk of the predictor. The minimizer of the first bound is the predictor proposed by Cortes et al. (2007), while the minimizer of the second bound is a predictor that has not previously been proposed. Both predictors are compared on practical tasks.
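
As context for the abstract, the following is a minimal sketch of the regression approach to structured output prediction that the paper analyses: a kernel ridge regression is learned from inputs to the output-kernel feature space, and prediction is done by searching a candidate set of outputs for the one whose feature map is closest to the regressed point. This matches the general form of the predictor attributed to Cortes et al. (2007); the paper's second, bound-minimizing predictor is not reproduced here. The input kernel, output kernel, candidate set, and regularization parameter lam are illustrative assumptions, not choices taken from the paper.

import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gram matrix of an RBF input kernel k_X (illustrative choice, not from the paper).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_inverse_factor(K_x_train, lam=1e-2):
    # Kernel ridge regression from phi_X(x) to phi_Y(y) only needs (K_X + lam*n*I)^{-1}.
    n = K_x_train.shape[0]
    return np.linalg.inv(K_x_train + lam * n * np.eye(n))

def predict(x, X_train, Y_train, inv_factor, output_kernel, candidates):
    # Pre-image decoding: return the candidate y minimizing the squared distance, in the
    # output feature space, between phi_Y(y) and the regressed point, i.e.
    #   k_Y(y, y) - 2 * k_Y(y, Y_train) @ (K_X + lam*n*I)^{-1} @ k_X(X_train, x),
    # where terms that do not depend on y are dropped.
    alpha = inv_factor @ rbf_kernel(X_train, x[None, :]).ravel()
    best, best_score = None, np.inf
    for y in candidates:
        k_y_train = np.array([output_kernel(y, y_i) for y_i in Y_train])
        score = output_kernel(y, y) - 2.0 * k_y_train @ alpha
        if score < best_score:
            best, best_score = y, score
    return best

# Toy usage: outputs are short binary label sequences and k_Y counts matching positions
# (a hypothetical output kernel chosen only for illustration).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(20, 3))
    Y_train = [tuple(rng.integers(0, 2, size=4)) for _ in range(20)]
    k_y = lambda y, z: float(sum(a == b for a, b in zip(y, z)))
    inv_factor = fit_inverse_factor(rbf_kernel(X_train, X_train))
    candidates = list(np.ndindex(2, 2, 2, 2))  # all 2^4 label sequences
    print(predict(rng.normal(size=3), X_train, Y_train, inv_factor, k_y, candidates))

Note that the k_Y(y, y) term only matters when the output kernel is not normalized; with a normalized output kernel the decoding step reduces to maximizing k_Y(y, Y_train) @ alpha over the candidate set.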

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-giguere13,
  title     = {Risk Bounds and Learning Algorithms for the Regression Approach to Structured Output Prediction},
  author    = {Giguère, Sébastien and Laviolette, François and Marchand, Mario and Sylla, Khadidja},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {107--114},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/giguere13.pdf},
  url       = {https://proceedings.mlr.press/v28/giguere13.html},
  abstract  = {We provide rigorous guarantees for the regression approach to structured output prediction. We show that the quadratic regression loss is a convex surrogate of the prediction loss when the output kernel satisfies some condition with respect to the prediction loss. We provide two upper bounds of the prediction risk that depend on the empirical quadratic risk of the predictor. The minimizer of the first bound is the predictor proposed by Cortes et al. (2007) while the minimizer of the second bound is a predictor that has never been proposed so far. Both predictors are compared on practical tasks.}
}
Endnote
%0 Conference Paper
%T Risk Bounds and Learning Algorithms for the Regression Approach to Structured Output Prediction
%A Sébastien Giguère
%A François Laviolette
%A Mario Marchand
%A Khadidja Sylla
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-giguere13
%I PMLR
%P 107--114
%U https://proceedings.mlr.press/v28/giguere13.html
%V 28
%N 1
%X We provide rigorous guarantees for the regression approach to structured output prediction. We show that the quadratic regression loss is a convex surrogate of the prediction loss when the output kernel satisfies some condition with respect to the prediction loss. We provide two upper bounds of the prediction risk that depend on the empirical quadratic risk of the predictor. The minimizer of the first bound is the predictor proposed by Cortes et al. (2007) while the minimizer of the second bound is a predictor that has never been proposed so far. Both predictors are compared on practical tasks.
RIS
TY - CPAPER
TI - Risk Bounds and Learning Algorithms for the Regression Approach to Structured Output Prediction
AU - Sébastien Giguère
AU - François Laviolette
AU - Mario Marchand
AU - Khadidja Sylla
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/02/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-giguere13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 1
SP - 107
EP - 114
L1 - http://proceedings.mlr.press/v28/giguere13.pdf
UR - https://proceedings.mlr.press/v28/giguere13.html
AB - We provide rigorous guarantees for the regression approach to structured output prediction. We show that the quadratic regression loss is a convex surrogate of the prediction loss when the output kernel satisfies some condition with respect to the prediction loss. We provide two upper bounds of the prediction risk that depend on the empirical quadratic risk of the predictor. The minimizer of the first bound is the predictor proposed by Cortes et al. (2007) while the minimizer of the second bound is a predictor that has never been proposed so far. Both predictors are compared on practical tasks.
ER -
APA
Giguère, S., Laviolette, F., Marchand, M. & Sylla, K. (2013). Risk Bounds and Learning Algorithms for the Regression Approach to Structured Output Prediction. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):107-114. Available from https://proceedings.mlr.press/v28/giguere13.html.
