Generalised GPLVM with Stochastic Variational Inference

Vidhi Lalchand, Aditya Ravuri, Neil D. Lawrence
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:7841-7864, 2022.

Abstract

Gaussian process latent variable models (GPLVM) are a flexible and non-linear approach to dimensionality reduction, extending classical Gaussian processes to an unsupervised learning context. The Bayesian incarnation of the GPLVM uses a variational framework, where the posterior over latent variables is approximated by a well-behaved variational family, a factorised Gaussian, yielding a tractable lower bound. However, the non-factorisability of the lower bound prevents truly scalable inference. In this work, we study the doubly stochastic formulation of the Bayesian GPLVM model amenable to minibatch training. We show how this framework is compatible with different latent variable formulations and perform experiments to compare a suite of models. Further, we demonstrate how we can train in the presence of massively missing data and obtain high-fidelity reconstructions. Finally, we benchmark the model against the canonical sparse GPLVM on high-dimensional data examples.
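The sketch below illustrates the kind of doubly stochastic training loop the abstract describes: a factorised Gaussian q(X) over latents, a sparse variational GP per output dimension, and an ELBO estimated from a minibatch of data points together with a reparameterised sample of their latents. It is a minimal illustration assuming GPyTorch's standard sparse variational GP machinery; the class name StochasticGPLVM, the placeholder data Y, and all sizes are hypothetical choices, not the authors' released implementation.

```python
import torch
import gpytorch
from gpytorch.models import ApproximateGP
from gpytorch.variational import CholeskyVariationalDistribution, VariationalStrategy


class StochasticGPLVM(ApproximateGP):
    """Sketch of a Bayesian GPLVM with a factorised Gaussian q(X) (illustrative, not the paper's code)."""

    def __init__(self, n, data_dim, latent_dim, n_inducing):
        batch = torch.Size([data_dim])  # one independent GP per observed dimension, shared kernel form
        inducing = torch.randn(data_dim, n_inducing, latent_dim)
        q_u = CholeskyVariationalDistribution(n_inducing, batch_shape=batch)
        strategy = VariationalStrategy(self, inducing, q_u, learn_inducing_locations=True)
        super().__init__(strategy)
        self.mean_module = gpytorch.means.ZeroMean(batch_shape=batch)
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(ard_num_dims=latent_dim, batch_shape=batch),
            batch_shape=batch,
        )
        # Factorised Gaussian q(X): a free mean and (log-)scale per latent coordinate of each point.
        self.X_mu = torch.nn.Parameter(torch.randn(n, latent_dim))
        self.X_log_sigma = torch.nn.Parameter(torch.zeros(n, latent_dim))

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

    def q_x(self, idx):
        # Variational posterior over the latents of a minibatch of data points.
        return torch.distributions.Normal(self.X_mu[idx], self.X_log_sigma[idx].exp())


n, d, q, m, b = 500, 12, 2, 32, 64
Y = torch.randn(n, d)  # placeholder for an (n x d) observation matrix

model = StochasticGPLVM(n, d, q, m)
likelihood = gpytorch.likelihoods.GaussianLikelihood(batch_shape=torch.Size([d]))
elbo = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=n)
prior_x = torch.distributions.Normal(torch.zeros(q), torch.ones(q))
opt = torch.optim.Adam([*model.parameters(), *likelihood.parameters()], lr=0.01)

for step in range(2000):
    opt.zero_grad()
    idx = torch.randint(0, n, (b,))       # source of stochasticity 1: minibatch of data points
    x = model.q_x(idx).rsample()          # source of stochasticity 2: reparameterised sample from q(X)
    output = model(x)                     # d independent GPs evaluated at the sampled latents
    # Per-point KL from q(X) to the standard-normal prior, matching the per-point scaling of the ELBO.
    kl_x = torch.distributions.kl_divergence(model.q_x(idx), prior_x).sum() / b
    loss = -elbo(output, Y[idx].T).sum() + kl_x
    loss.backward()
    opt.step()
```

The two sampling steps are what make the estimator "doubly" stochastic: subsampling data points gives an unbiased minibatch estimate of the bound, and the reparameterised draw from q(X) handles the expectation over latents that blocks the analytically collapsed bound from factorising across data points.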

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-lalchand22a,
  title     = {Generalised GPLVM with Stochastic Variational Inference},
  author    = {Lalchand, Vidhi and Ravuri, Aditya and Lawrence, Neil D.},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {7841--7864},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/lalchand22a/lalchand22a.pdf},
  url       = {https://proceedings.mlr.press/v151/lalchand22a.html},
  abstract  = {Gaussian process latent variable models (GPLVM) are a flexible and non-linear approach to dimensionality reduction, extending classical Gaussian processes to an unsupervised learning context. The Bayesian incarnation of the GPLVM uses a variational framework, where the posterior over latent variables is approximated by a well-behaved variational family, a factorised Gaussian, yielding a tractable lower bound. However, the non-factorisability of the lower bound prevents truly scalable inference. In this work, we study the doubly stochastic formulation of the Bayesian GPLVM model amenable to minibatch training. We show how this framework is compatible with different latent variable formulations and perform experiments to compare a suite of models. Further, we demonstrate how we can train in the presence of massively missing data and obtain high-fidelity reconstructions. Finally, we benchmark the model against the canonical sparse GPLVM on high-dimensional data examples.}
}
Endnote
%0 Conference Paper
%T Generalised GPLVM with Stochastic Variational Inference
%A Vidhi Lalchand
%A Aditya Ravuri
%A Neil D. Lawrence
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-lalchand22a
%I PMLR
%P 7841--7864
%U https://proceedings.mlr.press/v151/lalchand22a.html
%V 151
%X Gaussian process latent variable models (GPLVM) are a flexible and non-linear approach to dimensionality reduction, extending classical Gaussian processes to an unsupervised learning context. The Bayesian incarnation of the GPLVM uses a variational framework, where the posterior over latent variables is approximated by a well-behaved variational family, a factorised Gaussian, yielding a tractable lower bound. However, the non-factorisability of the lower bound prevents truly scalable inference. In this work, we study the doubly stochastic formulation of the Bayesian GPLVM model amenable to minibatch training. We show how this framework is compatible with different latent variable formulations and perform experiments to compare a suite of models. Further, we demonstrate how we can train in the presence of massively missing data and obtain high-fidelity reconstructions. Finally, we benchmark the model against the canonical sparse GPLVM on high-dimensional data examples.
APA
Lalchand, V., Ravuri, A. &amp; Lawrence, N.D. (2022). Generalised GPLVM with Stochastic Variational Inference. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:7841-7864. Available from https://proceedings.mlr.press/v151/lalchand22a.html.

Related Material

Download PDF: https://proceedings.mlr.press/v151/lalchand22a/lalchand22a.pdf