Guaranteed Scalable Learning of Latent Tree Models

Furong Huang, Niranjan Uma Naresh, Ioakeim Perros, Robert Chen, Jimeng Sun, Anima Anandkumar
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:883-893, 2020.

Abstract

We present an integrated approach to structure and parameter estimation in latent tree graphical models, in which some nodes are hidden. Our overall approach follows a “divide-and-conquer” strategy that learns models over small groups of variables and iteratively merges them into a global solution. The structure learning involves combinatorial operations such as minimum spanning tree construction and local recursive grouping; the parameter learning is based on the method of moments and on tensor decompositions. Our method is guaranteed to correctly recover the unknown tree structure and the model parameters with low sample complexity for the class of linear multivariate latent tree models, which includes discrete distributions, Gaussian distributions, and Gaussian mixtures. Our bulk asynchronous parallel implementation scales logarithmically with the number of variables and linearly with the dimensionality of each variable.
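To make the high-level recipe concrete, the Python sketch below illustrates the ingredients the abstract names: additive information distances feeding a minimum spanning tree (the “divide” step), a simplified sibling test of the kind used by local recursive grouping, and a tensor power iteration as used in method-of-moments parameter estimation. This is an illustrative reconstruction under simplifying assumptions (scalar Gaussian observed variables, a hypothetical noise tolerance tol), not the authors' released implementation.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def information_distances(X):
    """Pairwise information distances d(i, j) = -log|rho_ij| from samples.

    X has shape (n_samples, n_variables). For linear latent tree models
    these distances are additive along the paths of the (unknown) tree.
    """
    corr = np.corrcoef(X, rowvar=False)
    with np.errstate(divide="ignore"):
        d = -np.log(np.abs(corr))
    np.fill_diagonal(d, 0.0)
    return d

def mst_neighborhoods(d):
    """'Divide' step: build the MST over observed variables; each node
    together with its MST neighbors defines a small subproblem on which
    local recursive grouping can run independently before merging.
    """
    mst = minimum_spanning_tree(d).toarray()
    mst = mst + mst.T  # symmetrize the upper-triangular sparse output
    neighborhoods = {i: set(np.flatnonzero(mst[i])) for i in range(d.shape[0])}
    return mst, neighborhoods

def sibling_test(d, i, j, others, tol=0.1):
    """Decide whether leaves i and j share a hidden parent, using the
    additive-metric statistic Phi(i, j; k) = d(i, k) - d(j, k), which is
    constant in k for siblings (up to sampling noise bounded by tol).
    """
    phi = np.array([d[i, k] - d[j, k] for k in others])
    return phi.max() - phi.min() < tol

def tensor_power_iteration(T, n_iter=100, seed=0):
    """Recover one component of a symmetric third-order moment tensor
    T = sum_k w_k a_k (x) a_k (x) a_k via the power update
    v <- T(I, v, v) / ||T(I, v, v)||, the core step of method-of-moments
    parameter estimation with tensor decompositions.
    """
    v = np.random.default_rng(seed).standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = np.einsum("ijk,j,k->i", T, v, v)
        v /= np.linalg.norm(v)
    eigenvalue = np.einsum("ijk,i,j,k->", T, v, v, v)
    return eigenvalue, v

A full pipeline would run the sibling test within each MST neighborhood to build local latent subtrees, merge the subtrees along the MST, and then estimate the edge parameters by tensor decomposition of low-order moments; this divide-and-conquer structure is what enables the parallel, logarithmic scaling in the number of variables.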

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-huang20b,
  title     = {Guaranteed Scalable Learning of Latent Tree Models},
  author    = {Huang, Furong and Naresh, Niranjan Uma and Perros, Ioakeim and Chen, Robert and Sun, Jimeng and Anandkumar, Anima},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {883--893},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/huang20b/huang20b.pdf},
  url       = {https://proceedings.mlr.press/v115/huang20b.html},
  abstract  = {We present an integrated approach to structure and parameter estimation in latent tree graphical models, where some nodes are hidden. Our overall approach follows a “divide-and-conquer” strategy that learns models over small groups of variables and iteratively merges into a global solution. The structure learning involves combinatorial operations such as minimum spanning tree construction and local recursive grouping; the parameter learning is based on the method of moments and on tensor decompositions. Our method is guaranteed to correctly recover the unknown tree structure and the model parameters with low sample complexity for the class of linear multivariate latent tree models which includes discrete and Gaussian distributions, and Gaussian mixtures. Our bulk asynchronous parallel algorithm is implemented in parallel and scales logarithmically with the number of variables and linearly with dimensionality of each variable.}
}
Endnote
%0 Conference Paper
%T Guaranteed Scalable Learning of Latent Tree Models
%A Furong Huang
%A Niranjan Uma Naresh
%A Ioakeim Perros
%A Robert Chen
%A Jimeng Sun
%A Anima Anandkumar
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-huang20b
%I PMLR
%P 883--893
%U https://proceedings.mlr.press/v115/huang20b.html
%V 115
%X We present an integrated approach to structure and parameter estimation in latent tree graphical models, where some nodes are hidden. Our overall approach follows a “divide-and-conquer” strategy that learns models over small groups of variables and iteratively merges into a global solution. The structure learning involves combinatorial operations such as minimum spanning tree construction and local recursive grouping; the parameter learning is based on the method of moments and on tensor decompositions. Our method is guaranteed to correctly recover the unknown tree structure and the model parameters with low sample complexity for the class of linear multivariate latent tree models which includes discrete and Gaussian distributions, and Gaussian mixtures. Our bulk asynchronous parallel algorithm is implemented in parallel and scales logarithmically with the number of variables and linearly with dimensionality of each variable.
APA
Huang, F., Naresh, N.U., Perros, I., Chen, R., Sun, J. & Anandkumar, A. (2020). Guaranteed Scalable Learning of Latent Tree Models. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:883-893. Available from https://proceedings.mlr.press/v115/huang20b.html.
