Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models

Mohammad Emtiyaz Khan, Aleksandr Aravkin, Michael Friedlander, Matthias Seeger
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):951-959, 2013.

Abstract

Latent Gaussian models (LGMs) are widely used in statistics and machine learning. Bayesian inference in non-conjugate LGMs is difficult due to intractable integrals involving the Gaussian prior and non-conjugate likelihoods. Algorithms based on variational Gaussian (VG) approximations are widely employed since they strike a favorable balance between accuracy, generality, speed, and ease of use. However, the structure of the optimization problems associated with them remains poorly understood, and standard solvers take too long to converge. In this paper, we derive a novel dual variational inference approach, which exploits the convexity property of the VG approximations. The implication of our approach is that we obtain an algorithm that solves a convex optimization problem, reduces the number of variational parameters, and converges much faster than previous methods. Using real-world data, we demonstrate these advantages on a variety of LGMs, including Gaussian process classification and latent Gaussian Markov random fields.
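The central object in the abstract, the variational Gaussian (VG) lower bound, can be illustrated on a toy problem. The sketch below is a hypothetical minimal example, not the paper's dual algorithm: it fits a Gaussian q(f) = N(m, s^2) to a one-dimensional latent Gaussian model with a non-conjugate (Bernoulli-logit) likelihood by directly maximizing the VG bound; the paper's contribution is a faster convex dual reformulation of exactly this kind of primal problem. The Gauss-Hermite quadrature choice and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy non-conjugate LGM: prior p(f) = N(0, 1), likelihood p(y=1|f) = sigmoid(f).
# VG approximation q(f) = N(m, s^2); the VG lower bound on log p(y) is
#   E_q[log p(y|f)] - KL(q || p),
# with the expectation computed by Gauss-Hermite quadrature (an assumption
# for this sketch, not the paper's method).

nodes, weights = np.polynomial.hermite_e.hermegauss(32)  # probabilists' Hermite
weights = weights / weights.sum()                        # normalize to N(0,1) weights

def neg_vg_bound(params, y=1.0):
    m, log_s = params
    s = np.exp(log_s)
    f = m + s * nodes                                    # quadrature points under q
    # log sigmoid(f) for y=1, log(1 - sigmoid(f)) for y=0
    log_lik = -np.log1p(np.exp(-f)) if y == 1 else -np.log1p(np.exp(f))
    expected_ll = np.dot(weights, log_lik)
    kl = 0.5 * (s**2 + m**2 - 1.0) - log_s               # KL(N(m,s^2) || N(0,1))
    return -(expected_ll - kl)                           # negate: minimize

res = minimize(neg_vg_bound, x0=np.array([0.0, 0.0]), method="L-BFGS-B")
m_opt, s_opt = res.x[0], np.exp(res.x[1])
# Observing y=1 pulls the posterior mean positive and shrinks the variance
# below the prior's, as expected for a log-concave likelihood.
```

In higher dimensions the primal problem above has O(n^2) variational parameters (mean and covariance); the paper's dual approach reduces this to one or two parameters per likelihood term, which is where the speedup comes from.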

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-emtiyazkhan13,
  title     = {Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models},
  author    = {Mohammad Emtiyaz Khan and Aleksandr Aravkin and Michael Friedlander and Matthias Seeger},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {951--959},
  year      = {2013},
  editor    = {Sanjoy Dasgupta and David McAllester},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/emtiyazkhan13.pdf},
  url       = {http://proceedings.mlr.press/v28/emtiyazkhan13.html},
  abstract  = {Latent Gaussian models (LGMs) are widely used in statistics and machine learning. Bayesian inference in non-conjugate LGMs is difficult due to intractable integrals involving the Gaussian prior and non-conjugate likelihoods. Algorithms based on variational Gaussian (VG) approximations are widely employed since they strike a favorable balance between accuracy, generality, speed, and ease of use. However, the structure of the optimization problems associated with them remains poorly understood, and standard solvers take too long to converge. In this paper, we derive a novel dual variational inference approach, which exploits the convexity property of the VG approximations. The implication of our approach is that we obtain an algorithm that solves a convex optimization problem, reduces the number of variational parameters, and converges much faster than previous methods. Using real-world data, we demonstrate these advantages on a variety of LGMs, including Gaussian process classification and latent Gaussian Markov random fields.}
}
Endnote
%0 Conference Paper
%T Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models
%A Mohammad Emtiyaz Khan
%A Aleksandr Aravkin
%A Michael Friedlander
%A Matthias Seeger
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-emtiyazkhan13
%I PMLR
%J Proceedings of Machine Learning Research
%P 951--959
%U http://proceedings.mlr.press/v28/emtiyazkhan13.html
%V 28
%N 3
%W PMLR
%X Latent Gaussian models (LGMs) are widely used in statistics and machine learning. Bayesian inference in non-conjugate LGMs is difficult due to intractable integrals involving the Gaussian prior and non-conjugate likelihoods. Algorithms based on variational Gaussian (VG) approximations are widely employed since they strike a favorable balance between accuracy, generality, speed, and ease of use. However, the structure of the optimization problems associated with them remains poorly understood, and standard solvers take too long to converge. In this paper, we derive a novel dual variational inference approach, which exploits the convexity property of the VG approximations. The implication of our approach is that we obtain an algorithm that solves a convex optimization problem, reduces the number of variational parameters, and converges much faster than previous methods. Using real-world data, we demonstrate these advantages on a variety of LGMs, including Gaussian process classification and latent Gaussian Markov random fields.
RIS
TY  - CPAPER
TI  - Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models
AU  - Mohammad Emtiyaz Khan
AU  - Aleksandr Aravkin
AU  - Michael Friedlander
AU  - Matthias Seeger
BT  - Proceedings of the 30th International Conference on Machine Learning
PY  - 2013/02/13
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-emtiyazkhan13
PB  - PMLR
DP  - PMLR
SP  - 951
EP  - 959
L1  - http://proceedings.mlr.press/v28/emtiyazkhan13.pdf
UR  - http://proceedings.mlr.press/v28/emtiyazkhan13.html
AB  - Latent Gaussian models (LGMs) are widely used in statistics and machine learning. Bayesian inference in non-conjugate LGMs is difficult due to intractable integrals involving the Gaussian prior and non-conjugate likelihoods. Algorithms based on variational Gaussian (VG) approximations are widely employed since they strike a favorable balance between accuracy, generality, speed, and ease of use. However, the structure of the optimization problems associated with them remains poorly understood, and standard solvers take too long to converge. In this paper, we derive a novel dual variational inference approach, which exploits the convexity property of the VG approximations. The implication of our approach is that we obtain an algorithm that solves a convex optimization problem, reduces the number of variational parameters, and converges much faster than previous methods. Using real-world data, we demonstrate these advantages on a variety of LGMs, including Gaussian process classification and latent Gaussian Markov random fields.
ER  -
APA
Emtiyaz Khan, M., Aravkin, A., Friedlander, M. & Seeger, M. (2013). Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models. Proceedings of the 30th International Conference on Machine Learning, in PMLR 28(3):951-959