Learning Thin Junction Trees via Graph Cuts

Shahaf Dafna, Carlos Guestrin
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:113-120, 2009.

Abstract

Structure learning algorithms usually focus on the compactness of the learned model. However, compact representation does not imply the existence of tractable inference algorithms: both exact and approximate inference may still be NP-hard. This focus on compactness leads to learning good models that require approximate inference techniques, thus reducing their prediction quality. In this paper, we propose a method for learning an attractive class of models: bounded-treewidth junction trees. These models permit both compact representation of probability distributions and efficient exact inference. Our method uses a new global criterion to construct the tree. Our criterion, based on a Bethe approximation of the likelihood, transforms the structure learning problem into an intuitive graph-theoretic task. We present an efficient randomized algorithm with theoretical guarantees for finding good separators. We recursively apply this procedure to obtain a thin junction tree. Our extensive empirical evaluation demonstrates the benefit of applying exact inference using our models to answer queries. We also extend our technique to learn low-treewidth conditional random fields, and demonstrate significant improvements over state-of-the-art block-L1 regularization techniques.
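The recursive strategy the abstract describes can be sketched as follows. This is an illustrative toy, not the paper's method: it uses a naive highest-degree separator heuristic in place of the paper's Bethe-likelihood criterion and randomized separator search, and all function names are hypothetical.

```python
from collections import deque

def components(adj, nodes):
    """Connected components of the subgraph induced on `nodes`."""
    nodes, seen, comps = set(nodes), set(), []
    for s in nodes:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v in nodes and v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

def thin_junction_cliques(adj, nodes, k):
    """Recursively cut `nodes` with a separator of size <= k, returning
    cliques of size <= k + 1 whose neighbours overlap on the separator."""
    nodes = set(nodes)
    if len(nodes) <= k + 1:
        return [frozenset(nodes)]
    # Toy separator choice: the k vertices of highest induced degree.
    # The paper instead scores separators with a Bethe approximation of
    # the likelihood and finds good ones with a randomized algorithm.
    deg = lambda u: sum(1 for v in adj.get(u, ()) if v in nodes)
    sep = set(sorted(nodes, key=lambda u: (-deg(u), u))[:k])
    parts = components(adj, nodes - sep)
    if len(parts) <= 1:
        # The separator failed to disconnect the graph; stop splitting.
        return [frozenset(nodes)]
    cliques = []
    for comp in parts:
        # Keep the separator on every side so neighbouring cliques share
        # it, mirroring the running-intersection property of a junction tree.
        cliques.extend(thin_junction_cliques(adj, comp | sep, k))
    return cliques
```

On a six-variable path graph with treewidth bound k = 1, this recursion recovers the five edge cliques {0,1}, {1,2}, ..., {4,5}, each of size k + 1.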

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-dafna09a,
  title = {Learning Thin Junction Trees via Graph Cuts},
  author = {Dafna, Shahaf and Guestrin, Carlos},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages = {113--120},
  year = {2009},
  editor = {van Dyk, David and Welling, Max},
  volume = {5},
  series = {Proceedings of Machine Learning Research},
  address = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month = {16--18 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v5/dafna09a/dafna09a.pdf},
  url = {https://proceedings.mlr.press/v5/dafna09a.html},
  abstract = {Structure learning algorithms usually focus on the compactness of the learned model. However, compact representation does not imply the existence of tractable inference algorithms: both exact and approximate inference may still be NP-hard. This focus on compactness leads to learning good models that require approximate inference techniques, thus reducing their prediction quality. In this paper, we propose a method for learning an attractive class of models: bounded treewidth junction trees. Those models permit both compact representation of probability distributions and efficient exact inference. Our method uses a new global criterion to construct the tree. Our criterion, based on a Bethe approximation of the likelihood, transforms the structure learning problem into an intuitive graph theoretical task. We present an efficient randomized algorithm with theoretical guarantees for finding good separators. We recursively apply this procedure to obtain a thin junction tree. Our extensive empirical evaluation demonstrates the benefit of applying exact inference using our models to answer queries. We also extend our technique to learn low tree-width conditional random fields, and demonstrate significant improvements over state of the art block-L1 regularization techniques.}
}
Endnote
%0 Conference Paper
%T Learning Thin Junction Trees via Graph Cuts
%A Shahaf Dafna
%A Carlos Guestrin
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-dafna09a
%I PMLR
%P 113--120
%U https://proceedings.mlr.press/v5/dafna09a.html
%V 5
%X Structure learning algorithms usually focus on the compactness of the learned model. However, compact representation does not imply the existence of tractable inference algorithms: both exact and approximate inference may still be NP-hard. This focus on compactness leads to learning good models that require approximate inference techniques, thus reducing their prediction quality. In this paper, we propose a method for learning an attractive class of models: bounded treewidth junction trees. Those models permit both compact representation of probability distributions and efficient exact inference. Our method uses a new global criterion to construct the tree. Our criterion, based on a Bethe approximation of the likelihood, transforms the structure learning problem into an intuitive graph theoretical task. We present an efficient randomized algorithm with theoretical guarantees for finding good separators. We recursively apply this procedure to obtain a thin junction tree. Our extensive empirical evaluation demonstrates the benefit of applying exact inference using our models to answer queries. We also extend our technique to learn low tree-width conditional random fields, and demonstrate significant improvements over state of the art block-L1 regularization techniques.
RIS
TY - CPAPER
TI - Learning Thin Junction Trees via Graph Cuts
AU - Shahaf Dafna
AU - Carlos Guestrin
BT - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
DA - 2009/04/15
ED - David van Dyk
ED - Max Welling
ID - pmlr-v5-dafna09a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 5
SP - 113
EP - 120
L1 - http://proceedings.mlr.press/v5/dafna09a/dafna09a.pdf
UR - https://proceedings.mlr.press/v5/dafna09a.html
AB - Structure learning algorithms usually focus on the compactness of the learned model. However, compact representation does not imply the existence of tractable inference algorithms: both exact and approximate inference may still be NP-hard. This focus on compactness leads to learning good models that require approximate inference techniques, thus reducing their prediction quality. In this paper, we propose a method for learning an attractive class of models: bounded treewidth junction trees. Those models permit both compact representation of probability distributions and efficient exact inference. Our method uses a new global criterion to construct the tree. Our criterion, based on a Bethe approximation of the likelihood, transforms the structure learning problem into an intuitive graph theoretical task. We present an efficient randomized algorithm with theoretical guarantees for finding good separators. We recursively apply this procedure to obtain a thin junction tree. Our extensive empirical evaluation demonstrates the benefit of applying exact inference using our models to answer queries. We also extend our technique to learn low tree-width conditional random fields, and demonstrate significant improvements over state of the art block-L1 regularization techniques.
ER -
APA
Dafna, S. & Guestrin, C. (2009). Learning Thin Junction Trees via Graph Cuts. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 5:113-120. Available from https://proceedings.mlr.press/v5/dafna09a.html.