Ising Models with Latent Conditional Gaussian Variables

Frank Nussbaum, Joachim Giesen
Proceedings of the 30th International Conference on Algorithmic Learning Theory, PMLR 98:669-681, 2019.

Abstract

Ising models describe the joint probability distribution of a vector of binary feature variables. Typically, not all variables interact with each other, and one is interested in learning the presumably sparse network structure of the interacting variables. However, in the presence of latent variables, the conventional method of learning a sparse model can fail, because the latent variables induce indirect interactions among the observed variables. In the case of only a few latent conditional Gaussian variables, these spurious interactions contribute an additional low-rank component to the interaction parameters of the observed Ising model. We therefore propose to learn a sparse + low-rank decomposition of the parameters of an Ising model by solving a convex regularized likelihood problem. We show that the same problem can be obtained as the dual of a maximum-entropy problem with a new type of relaxation, where the sample means collectively need to match the expected values only up to a given tolerance. The solution to the convex optimization problem has consistency properties in the high-dimensional setting, where the number of observed binary variables and the number of latent conditional Gaussian variables are allowed to grow with the number of training samples.
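To make the setup concrete, the following is a schematic of the estimation problem the abstract describes, written in standard sparse-plus-low-rank notation. The symbols Theta, S, L, lambda, gamma, phi, and epsilon are illustrative choices of ours, not necessarily the paper's own notation.

% Pairwise Ising model over an observed binary vector x with
% interaction matrix Theta:
\[
  p_{\Theta}(x) \;\propto\; \exp\bigl(x^{\top} \Theta\, x\bigr).
\]
% Marginalizing out a few latent conditional Gaussian variables adds a
% quadratic term in x, i.e., a low-rank positive semidefinite component,
% suggesting the decomposition Theta = S + L with S sparse and L of low
% rank (sign conventions may differ in the paper). A convex regularized
% likelihood problem of the kind described in the abstract:
\[
  \min_{S,\; L \,\succeq\, 0} \;\; -\ell_n(S + L)
  \;+\; \lambda \, \lVert S \rVert_1 \;+\; \gamma \, \operatorname{tr}(L),
\]
% where \ell_n is the sample log-likelihood, the l1-norm promotes a
% sparse interaction network, and the trace norm (nuclear norm on the
% positive semidefinite cone) promotes low rank.
%
% The dual view: a maximum-entropy problem whose moment constraints are
% relaxed collectively up to a tolerance epsilon instead of being
% imposed exactly:
\[
  \max_{p} \; H(p)
  \quad \text{subject to} \quad
  \bigl\lVert \mathbb{E}_{p}[\phi(x)] - \hat{\mu}_n \bigr\rVert \;\le\; \varepsilon,
\]
% with sufficient statistics \phi, sample means \hat{\mu}_n, and entropy H.

By Lagrangian duality, the choice of norm and tolerance in the relaxed entropy problem corresponds to the regularization terms in the likelihood problem, which is the relationship the abstract refers to.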

Cite this Paper


BibTeX
@InProceedings{pmlr-v98-nussbaum19a,
  title     = {Ising Models with Latent Conditional {Gaussian} Variables},
  author    = {Nussbaum, Frank and Giesen, Joachim},
  booktitle = {Proceedings of the 30th International Conference on Algorithmic Learning Theory},
  pages     = {669--681},
  year      = {2019},
  editor    = {Garivier, Aurélien and Kale, Satyen},
  volume    = {98},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v98/nussbaum19a/nussbaum19a.pdf},
  url       = {https://proceedings.mlr.press/v98/nussbaum19a.html}
}
Endnote
%0 Conference Paper
%T Ising Models with Latent Conditional Gaussian Variables
%A Frank Nussbaum
%A Joachim Giesen
%B Proceedings of the 30th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Aurélien Garivier
%E Satyen Kale
%F pmlr-v98-nussbaum19a
%I PMLR
%P 669--681
%U https://proceedings.mlr.press/v98/nussbaum19a.html
%V 98
APA
Nussbaum, F. & Giesen, J. (2019). Ising Models with Latent Conditional Gaussian Variables. Proceedings of the 30th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 98:669-681. Available from https://proceedings.mlr.press/v98/nussbaum19a.html.
