Spanning Tree Approximations for Conditional Random Fields

Patrick Pletscher, Cheng Soon Ong, Joachim Buhmann
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:408-415, 2009.

Abstract

In this work we show that one can train Conditional Random Fields of intractable graphs effectively and efficiently by considering a mixture of random spanning trees of the underlying graphical model. Furthermore, we show how a maximum-likelihood estimator of such a training objective can subsequently be used for prediction on the full graph. We present experimental results which improve on the state of the art. Additionally, the training objective is less sensitive to regularization than pseudo-likelihood based training approaches. We perform the experimental validation on two classes of data sets where structure is important: image denoising and multilabel classification.
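The key ingredient above is drawing random spanning trees of a loopy graph, so that each tree admits exact inference. As an illustrative sketch only (the function `random_spanning_tree` and the random-weight Kruskal scheme below are our assumptions, not necessarily the paper's sampler), one simple way to draw a random spanning tree is to shuffle the edges and greedily keep those that join two different components:

```python
import random

def random_spanning_tree(n_nodes, edges, rng=None):
    """Draw a spanning tree by running Kruskal's algorithm on a random
    edge ordering. Illustrative sketch: yields random (not uniformly
    distributed) spanning trees; not necessarily the authors' scheme."""
    rng = rng or random.Random(0)
    parent = list(range(n_nodes))

    def find(x):
        # union-find root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    shuffled = list(edges)
    rng.shuffle(shuffled)

    tree = []
    for u, v in shuffled:
        ru, rv = find(u), find(v)
        if ru != rv:              # edge connects two components: keep it
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Example: 3x3 grid graph, the kind of loopy graph CRFs are often defined on
n = 3
n_nodes = n * n
edges = [(r * n + c, r * n + c + 1) for r in range(n) for c in range(n - 1)]
edges += [(r * n + c, (r + 1) * n + c) for r in range(n - 1) for c in range(n)]

tree = random_spanning_tree(n_nodes, edges)
print(len(tree))  # a spanning tree of 9 nodes always has 8 edges
```

Averaging the tree likelihoods over several such draws gives a tractable surrogate for the intractable full-graph likelihood, which is the spirit of the mixture objective described in the abstract.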

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-pletscher09a,
  title     = {Spanning Tree Approximations for Conditional Random Fields},
  author    = {Pletscher, Patrick and Ong, Cheng Soon and Buhmann, Joachim},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages     = {408--415},
  year      = {2009},
  editor    = {van Dyk, David and Welling, Max},
  volume    = {5},
  series    = {Proceedings of Machine Learning Research},
  address   = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v5/pletscher09a/pletscher09a.pdf},
  url       = {https://proceedings.mlr.press/v5/pletscher09a.html},
  abstract  = {In this work we show that one can train Conditional Random Fields of intractable graphs effectively and efficiently by considering a mixture of random spanning trees of the underlying graphical model. Furthermore, we show how a maximum-likelihood estimator of such a training objective can subsequently be used for prediction on the full graph. We present experimental results which improve on the state-of-the-art. Additionally, the training objective is less sensitive to the regularization than pseudo-likelihood based training approaches. We perform the experimental validation on two classes of data sets where structure is important: image denoising and multilabel classification.}
}
Endnote
%0 Conference Paper
%T Spanning Tree Approximations for Conditional Random Fields
%A Patrick Pletscher
%A Cheng Soon Ong
%A Joachim Buhmann
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-pletscher09a
%I PMLR
%P 408--415
%U https://proceedings.mlr.press/v5/pletscher09a.html
%V 5
%X In this work we show that one can train Conditional Random Fields of intractable graphs effectively and efficiently by considering a mixture of random spanning trees of the underlying graphical model. Furthermore, we show how a maximum-likelihood estimator of such a training objective can subsequently be used for prediction on the full graph. We present experimental results which improve on the state-of-the-art. Additionally, the training objective is less sensitive to the regularization than pseudo-likelihood based training approaches. We perform the experimental validation on two classes of data sets where structure is important: image denoising and multilabel classification.
RIS
TY  - CPAPER
TI  - Spanning Tree Approximations for Conditional Random Fields
AU  - Patrick Pletscher
AU  - Cheng Soon Ong
AU  - Joachim Buhmann
BT  - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
DA  - 2009/04/15
ED  - David van Dyk
ED  - Max Welling
ID  - pmlr-v5-pletscher09a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 5
SP  - 408
EP  - 415
L1  - http://proceedings.mlr.press/v5/pletscher09a/pletscher09a.pdf
UR  - https://proceedings.mlr.press/v5/pletscher09a.html
AB  - In this work we show that one can train Conditional Random Fields of intractable graphs effectively and efficiently by considering a mixture of random spanning trees of the underlying graphical model. Furthermore, we show how a maximum-likelihood estimator of such a training objective can subsequently be used for prediction on the full graph. We present experimental results which improve on the state-of-the-art. Additionally, the training objective is less sensitive to the regularization than pseudo-likelihood based training approaches. We perform the experimental validation on two classes of data sets where structure is important: image denoising and multilabel classification.
ER  -
APA
Pletscher, P., Ong, C.S. & Buhmann, J. (2009). Spanning Tree Approximations for Conditional Random Fields. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 5:408-415. Available from https://proceedings.mlr.press/v5/pletscher09a.html.
