Learning Convex QP Relaxations for Structured Prediction

Jeremy Jancsary, Sebastian Nowozin, Carsten Rother
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):915-923, 2013.

Abstract

We introduce a new large margin approach to discriminative training of intractable discrete graphical models. Our approach builds on a convex quadratic programming relaxation of the MAP inference problem. The model parameters are trained directly within this restricted class of energy functions so as to optimize the predictions on the training data. We address the issue of how to parameterize the resulting model and point out its relation to existing approaches. The primary motivation behind our use of the QP relaxation is its computational efficiency; yet, empirically, its predictive accuracy compares favorably to more expensive approaches. This makes it an appealing choice for many practical tasks.
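The central object in the abstract, a convex QP relaxation of MAP inference, can be illustrated with a toy sketch. Assuming a pairwise binary model whose energy is written as E(x) = cᵀx + ½ xᵀQx with Q positive semidefinite (a restricted, convex class of energy functions in the spirit of the one the paper trains within), the relaxation replaces x ∈ {0,1}ⁿ by x ∈ [0,1]ⁿ, yielding a box-constrained convex QP that projected gradient descent solves efficiently. The instance below is a random hypothetical problem, not the authors' parameterization or training procedure:

```python
import numpy as np

# Hypothetical toy instance: energy E(x) = c^T x + 0.5 x^T Q x
# with Q symmetric PSD, so minimizing over x in [0,1]^n is a convex QP.
rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
Q = A @ A.T                      # PSD by construction
c = rng.normal(size=n)

def solve_qp_box(Q, c, iters=500):
    """Projected gradient descent for min 0.5 x'Qx + c'x over x in [0,1]^n."""
    L = np.linalg.eigvalsh(Q).max()   # Lipschitz constant of the gradient
    step = 1.0 / max(L, 1e-12)
    x = np.full(len(c), 0.5)          # start at the center of the box
    for _ in range(iters):
        grad = Q @ x + c
        x = np.clip(x - step * grad, 0.0, 1.0)  # gradient step + projection
    return x

x_relaxed = solve_qp_box(Q, c)
x_map = (x_relaxed > 0.5).astype(int)  # round to a discrete labeling
```

Convexity is what buys the computational efficiency the abstract emphasizes: the relaxed problem has no local minima, so a simple first-order method finds its global optimum, and the fractional solution is then rounded to a discrete prediction.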

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-jancsary13,
  title     = {Learning Convex QP Relaxations for Structured Prediction},
  author    = {Jancsary, Jeremy and Nowozin, Sebastian and Rother, Carsten},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {915--923},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/jancsary13.pdf},
  url       = {https://proceedings.mlr.press/v28/jancsary13.html},
  abstract  = {We introduce a new large margin approach to discriminative training of intractable discrete graphical models. Our approach builds on a convex quadratic programming relaxation of the MAP inference problem. The model parameters are trained directly within this restricted class of energy functions so as to optimize the predictions on the training data. We address the issue of how to parameterize the resulting model and point out its relation to existing approaches. The primary motivation behind our use of the QP relaxation is its computational efficiency; yet, empirically, its predictive accuracy compares favorably to more expensive approaches. This makes it an appealing choice for many practical tasks.}
}
EndNote
%0 Conference Paper
%T Learning Convex QP Relaxations for Structured Prediction
%A Jeremy Jancsary
%A Sebastian Nowozin
%A Carsten Rother
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-jancsary13
%I PMLR
%P 915--923
%U https://proceedings.mlr.press/v28/jancsary13.html
%V 28
%N 3
%X We introduce a new large margin approach to discriminative training of intractable discrete graphical models. Our approach builds on a convex quadratic programming relaxation of the MAP inference problem. The model parameters are trained directly within this restricted class of energy functions so as to optimize the predictions on the training data. We address the issue of how to parameterize the resulting model and point out its relation to existing approaches. The primary motivation behind our use of the QP relaxation is its computational efficiency; yet, empirically, its predictive accuracy compares favorably to more expensive approaches. This makes it an appealing choice for many practical tasks.
RIS
TY  - CPAPER
TI  - Learning Convex QP Relaxations for Structured Prediction
AU  - Jeremy Jancsary
AU  - Sebastian Nowozin
AU  - Carsten Rother
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/05/26
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-jancsary13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 3
SP  - 915
EP  - 923
L1  - http://proceedings.mlr.press/v28/jancsary13.pdf
UR  - https://proceedings.mlr.press/v28/jancsary13.html
AB  - We introduce a new large margin approach to discriminative training of intractable discrete graphical models. Our approach builds on a convex quadratic programming relaxation of the MAP inference problem. The model parameters are trained directly within this restricted class of energy functions so as to optimize the predictions on the training data. We address the issue of how to parameterize the resulting model and point out its relation to existing approaches. The primary motivation behind our use of the QP relaxation is its computational efficiency; yet, empirically, its predictive accuracy compares favorably to more expensive approaches. This makes it an appealing choice for many practical tasks.
ER  -
APA
Jancsary, J., Nowozin, S. & Rother, C. (2013). Learning Convex QP Relaxations for Structured Prediction. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):915-923. Available from https://proceedings.mlr.press/v28/jancsary13.html.
