Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models

Mohammad Emtiyaz Khan, Aleksandr Aravkin, Michael Friedlander, Matthias Seeger
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):951-959, 2013.

Abstract

Latent Gaussian models (LGMs) are widely used in statistics and machine learning. Bayesian inference in non-conjugate LGMs is difficult due to intractable integrals involving the Gaussian prior and non-conjugate likelihoods. Algorithms based on Variational Gaussian (VG) approximations are widely employed since they strike a favorable balance between accuracy, generality, speed, and ease of use. However, the structure of the optimization problems associated with them remains poorly understood, and standard solvers take too long to converge. In this paper, we derive a novel dual variational inference approach that exploits the convexity of the VG approximation. As a result, we obtain an algorithm that solves a convex optimization problem, reduces the number of variational parameters, and converges much faster than previous methods. Using real-world data, we demonstrate these advantages on a variety of LGMs, including Gaussian process classification and latent Gaussian Markov random fields.
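
For concreteness, the following is a minimal NumPy sketch of the primal VG objective for Gaussian process classification, written in the reduced parameterization m = K alpha, V = (K^-1 + diag(lambda))^-1, which uses only two variational parameters per data point rather than a full mean and covariance. This illustrates the parameter reduction the abstract alludes to; the paper itself goes further and optimizes a convex dual of this objective, which is not reproduced here. The toy data and all variable names are illustrative assumptions, not the authors' code.

    import numpy as np
    from scipy.optimize import minimize

    # Toy binary GP-classification data (illustrative only)
    rng = np.random.default_rng(0)
    n = 30
    X = np.sort(rng.uniform(-3, 3, n))
    y = np.where(np.sin(X) + 0.3 * rng.standard_normal(n) > 0, 1.0, -1.0)

    # RBF kernel: Gaussian prior covariance K over the latent values z
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2) + 1e-6 * np.eye(n)
    Kinv = np.linalg.inv(K)

    # Gauss-Hermite quadrature for E_{z ~ N(m, v)}[log p(y | z)]
    gh_x, gh_w = np.polynomial.hermite.hermgauss(20)

    def expected_loglik(m, v):
        # Logistic log-likelihood log p(y|z) = -log(1 + exp(-y z)),
        # averaged over z ~ N(m, v) per point, then summed over points.
        z = m[:, None] + np.sqrt(2.0 * v)[:, None] * gh_x[None, :]
        ll = -np.logaddexp(0.0, -y[:, None] * z)
        return np.sum(ll @ gh_w) / np.sqrt(np.pi)

    def neg_elbo(params):
        alpha, rho = params[:n], params[n:]
        lam = np.exp(rho)                 # one variance parameter per point
        m = K @ alpha                     # posterior mean, m = K alpha
        V = np.linalg.inv(Kinv + np.diag(lam))
        # KL( N(m, V) || N(0, K) ), using m' K^-1 m = alpha' K alpha
        kl = 0.5 * (np.trace(Kinv @ V) + alpha @ K @ alpha - n
                    + np.linalg.slogdet(K)[1] - np.linalg.slogdet(V)[1])
        return kl - expected_loglik(m, np.diag(V))

    res = minimize(neg_elbo, np.zeros(2 * n), method="L-BFGS-B")
    print("ELBO at optimum:", -res.fun)

Even in this reduced form the primal objective is optimized over 2n coupled variables; the paper's contribution is to show that the problem admits a convex dual with fewer variables, on which standard convex solvers converge much faster.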

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-emtiyazkhan13,
  title     = {Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models},
  author    = {Emtiyaz Khan, Mohammad and Aravkin, Aleksandr and Friedlander, Michael and Seeger, Matthias},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {951--959},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/emtiyazkhan13.pdf},
  url       = {https://proceedings.mlr.press/v28/emtiyazkhan13.html},
  abstract  = {Latent Gaussian models (LGMs) are widely used in statistics and machine learning. Bayesian inference in non-conjugate LGMs is difficult due to intractable integrals involving the Gaussian prior and non-conjugate likelihoods. Algorithms based on Variational Gaussian (VG) approximations are widely employed since they strike a favorable balance between accuracy, generality, speed, and ease of use. However, the structure of the optimization problems associated with them remains poorly understood, and standard solvers take too long to converge. In this paper, we derive a novel dual variational inference approach that exploits the convexity of the VG approximation. As a result, we obtain an algorithm that solves a convex optimization problem, reduces the number of variational parameters, and converges much faster than previous methods. Using real-world data, we demonstrate these advantages on a variety of LGMs, including Gaussian process classification and latent Gaussian Markov random fields.}
}
APA
Emtiyaz Khan, M., Aravkin, A., Friedlander, M., & Seeger, M. (2013). Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):951-959. Available from https://proceedings.mlr.press/v28/emtiyazkhan13.html.