A Fixed-Point Operator for Inference in Variational Bayesian Latent Gaussian Models

Rishit Sheth, Roni Khardon
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:761-769, 2016.

Abstract

Latent Gaussian Models (LGM) provide a rich modeling framework with general inference procedures. The variational approximation offers an effective solution for such models and has attracted a significant amount of interest. Recent work proposed a fixed-point (FP) update procedure to optimize the covariance matrix in the variational solution and demonstrated its efficacy in specific models. The paper makes three contributions. First, it shows that the same approach can be used more generally in extensions of LGM. Second, it provides an analysis identifying conditions for the convergence of the FP method. Third, it provides an extensive experimental evaluation in Gaussian processes, sparse Gaussian processes, and generalized linear models, with several non-conjugate observation likelihoods, showing wide applicability of the FP method and a significant advantage over gradient-based optimization.
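To make the idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of a fixed-point covariance update in one of the model classes the abstract mentions: a generalized linear model with a non-conjugate Bernoulli-logit likelihood, i.e. variational Bayesian logistic regression. The variational posterior is q(w) = N(m, V); at the optimum, V satisfies V⁻¹ = S₀⁻¹ + Xᵀ diag(λ) X, where each λᵢ is determined by the expected log-likelihood's derivative with respect to the marginal variance of fᵢ = xᵢᵀw. Iterating this equation is the FP update. All variable names and the quadrature-based expectation are choices made for this sketch.

```python
import numpy as np

# Synthetic logistic-regression data (illustrative only)
rng = np.random.default_rng(0)
n, d = 40, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

S0 = np.eye(d)              # prior covariance of w
S0_inv = np.linalg.inv(S0)
m = np.zeros(d)             # variational mean (held fixed here for brevity)
V = np.eye(d)               # variational covariance, iterated to a fixed point

# Gauss-Hermite nodes/weights for 1-D expectations over f_i ~ N(mu_i, v_i)
nodes, weights = np.polynomial.hermite_e.hermegauss(20)
weights = weights / weights.sum()   # normalize to a standard-normal measure

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(100):
    mu = X @ m                                # marginal means of f_i under q
    v = np.einsum('ij,jk,ik->i', X, V, X)     # marginal variances x_i^T V x_i
    # E[sigma(f)(1 - sigma(f))] under q(f_i), via quadrature; for the
    # Bernoulli-logit likelihood this gives the lambda_i in the FP equation.
    f = mu[:, None] + np.sqrt(v)[:, None] * nodes[None, :]
    s = sigmoid(f)
    lam = (weights[None, :] * s * (1.0 - s)).sum(axis=1)
    # Fixed-point update: V <- (S0^{-1} + X^T diag(lam) X)^{-1}
    V_new = np.linalg.inv(S0_inv + X.T @ (lam[:, None] * X))
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
```

Because each λᵢ is bounded (here by 1/4), the iterated map stays well-conditioned; the paper's convergence analysis identifies conditions under which such iterations converge, which this sketch does not attempt to reproduce.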

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-sheth16,
  title     = {A Fixed-Point Operator for Inference in Variational Bayesian Latent Gaussian Models},
  author    = {Rishit Sheth and Roni Khardon},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {761--769},
  year      = {2016},
  editor    = {Arthur Gretton and Christian C. Robert},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/sheth16.pdf},
  url       = {http://proceedings.mlr.press/v51/sheth16.html},
  abstract  = {Latent Gaussian Models (LGM) provide a rich modeling framework with general inference procedures. The variational approximation offers an effective solution for such models and has attracted a significant amount of interest. Recent work proposed a fixed-point (FP) update procedure to optimize the covariance matrix in the variational solution and demonstrated its efficacy in specific models. The paper makes three contributions. First, it shows that the same approach can be used more generally in extensions of LGM. Second, it provides an analysis identifying conditions for the convergence of the FP method. Third, it provides an extensive experimental evaluation in Gaussian processes, sparse Gaussian processes, and generalized linear models, with several non-conjugate observation likelihoods, showing wide applicability of the FP method and a significant advantage over gradient based optimization.}
}
Endnote
%0 Conference Paper
%T A Fixed-Point Operator for Inference in Variational Bayesian Latent Gaussian Models
%A Rishit Sheth
%A Roni Khardon
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-sheth16
%I PMLR
%J Proceedings of Machine Learning Research
%P 761--769
%U http://proceedings.mlr.press
%V 51
%W PMLR
%X Latent Gaussian Models (LGM) provide a rich modeling framework with general inference procedures. The variational approximation offers an effective solution for such models and has attracted a significant amount of interest. Recent work proposed a fixed-point (FP) update procedure to optimize the covariance matrix in the variational solution and demonstrated its efficacy in specific models. The paper makes three contributions. First, it shows that the same approach can be used more generally in extensions of LGM. Second, it provides an analysis identifying conditions for the convergence of the FP method. Third, it provides an extensive experimental evaluation in Gaussian processes, sparse Gaussian processes, and generalized linear models, with several non-conjugate observation likelihoods, showing wide applicability of the FP method and a significant advantage over gradient based optimization.
RIS
TY - CPAPER
TI - A Fixed-Point Operator for Inference in Variational Bayesian Latent Gaussian Models
AU - Rishit Sheth
AU - Roni Khardon
BT - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
PY - 2016/05/02
DA - 2016/05/02
ED - Arthur Gretton
ED - Christian C. Robert
ID - pmlr-v51-sheth16
PB - PMLR
SP - 761
DP - PMLR
EP - 769
L1 - http://proceedings.mlr.press/v51/sheth16.pdf
UR - http://proceedings.mlr.press/v51/sheth16.html
AB - Latent Gaussian Models (LGM) provide a rich modeling framework with general inference procedures. The variational approximation offers an effective solution for such models and has attracted a significant amount of interest. Recent work proposed a fixed-point (FP) update procedure to optimize the covariance matrix in the variational solution and demonstrated its efficacy in specific models. The paper makes three contributions. First, it shows that the same approach can be used more generally in extensions of LGM. Second, it provides an analysis identifying conditions for the convergence of the FP method. Third, it provides an extensive experimental evaluation in Gaussian processes, sparse Gaussian processes, and generalized linear models, with several non-conjugate observation likelihoods, showing wide applicability of the FP method and a significant advantage over gradient based optimization.
ER -
APA
Sheth, R. & Khardon, R. (2016). A Fixed-Point Operator for Inference in Variational Bayesian Latent Gaussian Models. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in PMLR 51:761-769.