Learning Ising and Potts Models with Latent Variables

Surbhi Goel
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3557-3566, 2020.

Abstract

We study the problem of learning graphical models with latent variables. We give the *first* efficient algorithms for learning: 1) ferromagnetic Ising models with latent variables under *arbitrary* external fields, and 2) ferromagnetic Potts models with latent variables under a unidirectional non-negative external field. Our algorithms have optimal dependence on the dimension but suffer from a sub-optimal dependence on the underlying sparsity of the graph. Our results rely on two structural properties of the underlying graphical models. These in turn allow us to design an influence function which can be maximized greedily to recover the structure of the underlying graphical model. These structural results may be of independent interest.
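The abstract's "influence function maximized greedily" idea can be illustrated with a generic sketch: for a target node, greedily add the candidate neighbor whose inclusion most increases an empirical influence statistic. This is only an illustration of the general greedy-influence pattern, not the paper's actual influence function or algorithm; `empirical_influence` and `greedy_neighborhood` are hypothetical names, and the statistic used here (conditional mean shift of the target spin) is an assumption for the example.

```python
def empirical_influence(samples, u, S):
    """Toy influence statistic (an assumption, not the paper's definition):
    how much the mean of spin x_u shifts when all nodes in S are +1,
    estimated from samples of +/-1 spin configurations."""
    cond = [s[u] for s in samples if all(s[j] == 1 for j in S)]
    if not cond:
        return 0.0
    base = sum(s[u] for s in samples) / len(samples)
    return sum(cond) / len(cond) - base

def greedy_neighborhood(samples, u, n_nodes, k):
    """Greedily grow a candidate neighborhood of node u, at each step
    adding the node that most increases the empirical influence."""
    S = []
    for _ in range(k):
        best, best_gain = None, 0.0
        for v in range(n_nodes):
            if v == u or v in S:
                continue
            gain = (empirical_influence(samples, u, S + [v])
                    - empirical_influence(samples, u, S))
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:  # no candidate improves the influence; stop early
            break
        S.append(best)
    return S

# Usage: two perfectly correlated spins, so node 1 maximally influences node 0.
samples = [[1, 1], [-1, -1], [1, 1], [-1, -1]]
print(greedy_neighborhood(samples, 0, 2, 1))  # -> [1]
```

In the paper this greedy scheme comes with guarantees derived from structural properties of ferromagnetic models; the sketch above only conveys the control flow of a greedy influence maximizer.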

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-goel20a,
  title     = {Learning Ising and Potts Models with Latent Variables},
  author    = {Goel, Surbhi},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {3557--3566},
  year      = {2020},
  editor    = {Silvia Chiappa and Roberto Calandra},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/goel20a/goel20a.pdf},
  url       = {http://proceedings.mlr.press/v108/goel20a.html},
  abstract  = {We study the problem of learning graphical models with latent variables. We give the {\em first} efficient algorithms for learning: 1) ferromagnetic Ising models with latent variables under {\em arbitrary} external fields, and 2) ferromagnetic Potts models with latent variables under a unidirectional non-negative external field. Our algorithms have optimal dependence on the dimension but suffer from a sub-optimal dependence on the underlying sparsity of the graph. Our results rely on two structural properties of the underlying graphical models. These in turn allow us to design an influence function which can be maximized greedily to recover the structure of the underlying graphical model. These structural results may be of independent interest.}
}
Endnote
%0 Conference Paper
%T Learning Ising and Potts Models with Latent Variables
%A Surbhi Goel
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-goel20a
%I PMLR
%P 3557--3566
%U http://proceedings.mlr.press/v108/goel20a.html
%V 108
%X We study the problem of learning graphical models with latent variables. We give the first efficient algorithms for learning: 1) ferromagnetic Ising models with latent variables under arbitrary external fields, and 2) ferromagnetic Potts models with latent variables under a unidirectional non-negative external field. Our algorithms have optimal dependence on the dimension but suffer from a sub-optimal dependence on the underlying sparsity of the graph. Our results rely on two structural properties of the underlying graphical models. These in turn allow us to design an influence function which can be maximized greedily to recover the structure of the underlying graphical model. These structural results may be of independent interest.
APA
Goel, S. (2020). Learning Ising and Potts Models with Latent Variables. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3557-3566. Available from http://proceedings.mlr.press/v108/goel20a.html.
