Learning Sparse Markov Network Structure via Ensemble-of-Trees Models

Yuanqing Lin, Shenghuo Zhu, Daniel Lee, Ben Taskar
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:360-367, 2009.

Abstract

Learning the sparse structure of a general Markov network is a hard problem. One of the main difficulties is the computation of its generally intractable partition function. To circumvent this difficulty, this paper proposes to learn the network structure using an ensemble-of-trees (ET) model. The ET model was first introduced by Meila and Jaakkola [1], and it approximates a Markov network using a mixture of all possible (super-exponentially many) spanning trees. The advantage of the ET model is that, although it sums over super-exponentially many trees, its partition function as well as data likelihood can be computed in closed form. Furthermore, since the ET model tends to represent a Markov network using as small a number of trees as possible, it provides a natural regularization for finding a sparse network structure. Our simulation results show that the proposed ET approach is able to accurately recover the true Markov network connectivity and significantly outperform the state-of-the-art approaches for both discrete and continuous random variable networks. Furthermore, we also demonstrate the use of the ET model for discovering the network of words from blog posts.
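The closed-form sum over super-exponentially many spanning trees that the abstract mentions rests on the weighted Matrix-Tree theorem: the sum, over all spanning trees of a graph, of the product of edge weights equals any cofactor of the graph Laplacian. A minimal sketch of that identity (not the paper's code; `spanning_tree_sum` and the toy weight matrix are illustrative):

```python
import numpy as np

def spanning_tree_sum(W):
    """Sum over all spanning trees of the product of edge weights,
    via the weighted Matrix-Tree theorem: the sum equals any
    cofactor of the graph Laplacian L = diag(row sums of W) - W."""
    L = np.diag(W.sum(axis=1)) - W
    # Delete the first row and column -> a principal cofactor of L.
    return np.linalg.det(L[1:, 1:])

# Complete graph on 3 nodes with all edge weights 1: the triangle
# has exactly 3 spanning trees, each with weight product 1.
W = np.ones((3, 3)) - np.eye(3)
print(spanning_tree_sum(W))  # ≈ 3.0
```

Summing tree-structured likelihood terms weighted this way is what lets the ET model evaluate its partition function in a single determinant rather than an explicit enumeration of trees.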

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-lin09a,
  title     = {Learning Sparse Markov Network Structure via Ensemble-of-Trees Models},
  author    = {Lin, Yuanqing and Zhu, Shenghuo and Lee, Daniel and Taskar, Ben},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages     = {360--367},
  year      = {2009},
  editor    = {van Dyk, David and Welling, Max},
  volume    = {5},
  series    = {Proceedings of Machine Learning Research},
  address   = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida, USA},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v5/lin09a/lin09a.pdf},
  url       = {https://proceedings.mlr.press/v5/lin09a.html},
  abstract  = {Learning the sparse structure of a general Markov network is a hard problem. One of the main difficulties is the computation of its generally intractable partition function. To circumvent this difficulty, this paper proposes to learn the network structure using an ensemble-of-trees (ET) model. The ET model was first introduced by Meila and Jaakkola [1], and it approximates a Markov network using a mixture of all possible (super-exponentially many) spanning trees. The advantage of the ET model is that, although it sums over super-exponentially many trees, its partition function as well as data likelihood can be computed in closed form. Furthermore, since the ET model tends to represent a Markov network using as small a number of trees as possible, it provides a natural regularization for finding a sparse network structure. Our simulation results show that the proposed ET approach is able to accurately recover the true Markov network connectivity and significantly outperform the state-of-the-art approaches for both discrete and continuous random variable networks. Furthermore, we also demonstrate the use of the ET model for discovering the network of words from blog posts.}
}
Endnote
%0 Conference Paper
%T Learning Sparse Markov Network Structure via Ensemble-of-Trees Models
%A Yuanqing Lin
%A Shenghuo Zhu
%A Daniel Lee
%A Ben Taskar
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-lin09a
%I PMLR
%P 360--367
%U https://proceedings.mlr.press/v5/lin09a.html
%V 5
%X Learning the sparse structure of a general Markov network is a hard problem. One of the main difficulties is the computation of its generally intractable partition function. To circumvent this difficulty, this paper proposes to learn the network structure using an ensemble-of-trees (ET) model. The ET model was first introduced by Meila and Jaakkola [1], and it approximates a Markov network using a mixture of all possible (super-exponentially many) spanning trees. The advantage of the ET model is that, although it sums over super-exponentially many trees, its partition function as well as data likelihood can be computed in closed form. Furthermore, since the ET model tends to represent a Markov network using as small a number of trees as possible, it provides a natural regularization for finding a sparse network structure. Our simulation results show that the proposed ET approach is able to accurately recover the true Markov network connectivity and significantly outperform the state-of-the-art approaches for both discrete and continuous random variable networks. Furthermore, we also demonstrate the use of the ET model for discovering the network of words from blog posts.
RIS
TY - CPAPER
TI - Learning Sparse Markov Network Structure via Ensemble-of-Trees Models
AU - Yuanqing Lin
AU - Shenghuo Zhu
AU - Daniel Lee
AU - Ben Taskar
BT - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
DA - 2009/04/15
ED - David van Dyk
ED - Max Welling
ID - pmlr-v5-lin09a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 5
SP - 360
EP - 367
L1 - http://proceedings.mlr.press/v5/lin09a/lin09a.pdf
UR - https://proceedings.mlr.press/v5/lin09a.html
AB - Learning the sparse structure of a general Markov network is a hard problem. One of the main difficulties is the computation of its generally intractable partition function. To circumvent this difficulty, this paper proposes to learn the network structure using an ensemble-of-trees (ET) model. The ET model was first introduced by Meila and Jaakkola [1], and it approximates a Markov network using a mixture of all possible (super-exponentially many) spanning trees. The advantage of the ET model is that, although it sums over super-exponentially many trees, its partition function as well as data likelihood can be computed in closed form. Furthermore, since the ET model tends to represent a Markov network using as small a number of trees as possible, it provides a natural regularization for finding a sparse network structure. Our simulation results show that the proposed ET approach is able to accurately recover the true Markov network connectivity and significantly outperform the state-of-the-art approaches for both discrete and continuous random variable networks. Furthermore, we also demonstrate the use of the ET model for discovering the network of words from blog posts.
ER -
APA
Lin, Y., Zhu, S., Lee, D. & Taskar, B. (2009). Learning Sparse Markov Network Structure via Ensemble-of-Trees Models. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 5:360-367. Available from https://proceedings.mlr.press/v5/lin09a.html.
