Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling

Jian Xu, Shian Du, Junmei Yang, Qianli Ma, Delu Zeng, John Paisley
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:4663-4680, 2025.

Abstract

Gaussian Process Latent Variable Models (GPLVMs) have become increasingly popular for unsupervised tasks such as dimensionality reduction and missing data recovery due to their flexibility and non-linear nature. An importance-weighted version of the Bayesian GPLVM has been proposed to obtain a tighter variational bound. However, this approach is largely limited to simple data structures, since constructing an effective proposal distribution becomes challenging in high-dimensional spaces or with complex datasets. In this work, we propose VAIS-GPLVM, a Variational Annealed Importance Sampling method that leverages time-inhomogeneous unadjusted Langevin dynamics to construct the variational posterior. By transforming the posterior into a sequence of intermediate distributions via annealing, we combine the strengths of Sequential Monte Carlo samplers and variational inference (VI) to explore a wider range of posterior distributions and gradually approach the target distribution. We further propose an efficient algorithm that reparameterizes all variables in the evidence lower bound (ELBO). Experimental results on both toy and image datasets demonstrate that our method outperforms state-of-the-art methods in terms of tighter variational bounds, higher log-likelihoods, and more robust convergence.
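For readers who want a concrete picture of the sampler described above, the following is a minimal sketch of annealed importance sampling with unadjusted Langevin transitions on a toy 1-D target. The target density, the geometric annealing schedule, the step size, and all function names are illustrative assumptions; this is not the paper's VAIS-GPLVM implementation, which operates on GPLVM latent variables and optimizes a reparameterized ELBO end-to-end.

```python
import numpy as np

# Toy unnormalized target: a symmetric two-component Gaussian mixture
# (an illustrative stand-in for an intractable posterior).
def log_target(x):
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def grad_log_target(x):
    # Central-difference gradient keeps the sketch self-contained.
    eps = 1e-5
    return (log_target(x + eps) - log_target(x - eps)) / (2 * eps)

def log_init(x):
    # Starting distribution q_0 = N(0, 3^2), up to a constant.
    return -0.5 * (x / 3.0) ** 2

def grad_log_init(x):
    return -x / 9.0

def ais_ula(n_particles=1000, n_steps=50, step_size=0.05, seed=0):
    """Annealed importance sampling with unadjusted Langevin transitions.

    Intermediate targets interpolate geometrically,
        log pi_t = (1 - beta_t) * log q_0 + beta_t * log p,
    with beta_t increasing from 0 to 1 (the annealing schedule).
    Returns the particles and their log importance weights.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.normal(0.0, 3.0, size=n_particles)  # samples from q_0
    log_w = np.zeros(n_particles)
    for t in range(1, n_steps + 1):
        # Standard AIS weight increment, evaluated before moving the particles:
        # log pi_t(x) - log pi_{t-1}(x) = (beta_t - beta_{t-1})(log p - log q_0).
        log_w += (betas[t] - betas[t - 1]) * (log_target(x) - log_init(x))
        # One unadjusted Langevin step targeting the current intermediate
        # distribution (no Metropolis correction, hence "unadjusted").
        grad = (1 - betas[t]) * grad_log_init(x) + betas[t] * grad_log_target(x)
        x = x + step_size * grad + np.sqrt(2 * step_size) * rng.normal(size=n_particles)
    return x, log_w

x, log_w = ais_ula()
# Self-normalized importance weights; E_p[x^2] is about 5 for this mixture.
w = np.exp(log_w - log_w.max())
print("estimated E[x^2]:", np.sum(w * x**2) / np.sum(w))
```

Note that ULA kernels are not exactly invariant for their targets, so these importance weights are biased; the paper's framing addresses this by treating the entire time-inhomogeneous Langevin chain as a variational posterior and optimizing the resulting ELBO, rather than relying on the weights being exact.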

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-xu25a,
  title     = {Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling},
  author    = {Xu, Jian and Du, Shian and Yang, Junmei and Ma, Qianli and Zeng, Delu and Paisley, John},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {4663--4680},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/xu25a/xu25a.pdf},
  url       = {https://proceedings.mlr.press/v286/xu25a.html},
  abstract  = {Gaussian Process Latent Variable Models (GPLVMs) have become increasingly popular for unsupervised tasks such as dimensionality reduction and missing data recovery due to their flexibility and non-linear nature. An importance-weighted version of the Bayesian GPLVM has been proposed to obtain a tighter variational bound. However, this approach is largely limited to simple data structures, since constructing an effective proposal distribution becomes challenging in high-dimensional spaces or with complex datasets. In this work, we propose VAIS-GPLVM, a Variational Annealed Importance Sampling method that leverages time-inhomogeneous unadjusted Langevin dynamics to construct the variational posterior. By transforming the posterior into a sequence of intermediate distributions via annealing, we combine the strengths of Sequential Monte Carlo samplers and variational inference (VI) to explore a wider range of posterior distributions and gradually approach the target distribution. We further propose an efficient algorithm that reparameterizes all variables in the evidence lower bound (ELBO). Experimental results on both toy and image datasets demonstrate that our method outperforms state-of-the-art methods in terms of tighter variational bounds, higher log-likelihoods, and more robust convergence.}
}
Endnote
%0 Conference Paper
%T Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling
%A Jian Xu
%A Shian Du
%A Junmei Yang
%A Qianli Ma
%A Delu Zeng
%A John Paisley
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-xu25a
%I PMLR
%P 4663--4680
%U https://proceedings.mlr.press/v286/xu25a.html
%V 286
%X Gaussian Process Latent Variable Models (GPLVMs) have become increasingly popular for unsupervised tasks such as dimensionality reduction and missing data recovery due to their flexibility and non-linear nature. An importance-weighted version of the Bayesian GPLVM has been proposed to obtain a tighter variational bound. However, this approach is largely limited to simple data structures, since constructing an effective proposal distribution becomes challenging in high-dimensional spaces or with complex datasets. In this work, we propose VAIS-GPLVM, a Variational Annealed Importance Sampling method that leverages time-inhomogeneous unadjusted Langevin dynamics to construct the variational posterior. By transforming the posterior into a sequence of intermediate distributions via annealing, we combine the strengths of Sequential Monte Carlo samplers and variational inference (VI) to explore a wider range of posterior distributions and gradually approach the target distribution. We further propose an efficient algorithm that reparameterizes all variables in the evidence lower bound (ELBO). Experimental results on both toy and image datasets demonstrate that our method outperforms state-of-the-art methods in terms of tighter variational bounds, higher log-likelihoods, and more robust convergence.
APA
Xu, J., Du, S., Yang, J., Ma, Q., Zeng, D. & Paisley, J. (2025). Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:4663-4680. Available from https://proceedings.mlr.press/v286/xu25a.html.