Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes

Ba-Hien Tran, Babak Shahbaba, Stephan Mandt, Maurizio Filippone
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:34409-34430, 2023.

Abstract

We present a fully Bayesian autoencoder model that treats both local latent variables and global decoder parameters in a Bayesian fashion. This approach allows for flexible priors and posterior approximations while keeping the inference costs low. To achieve this, we introduce an amortized MCMC approach by utilizing an implicit stochastic network to learn sampling from the posterior over local latent variables. Furthermore, we extend the model by incorporating a Sparse Gaussian Process prior over the latent space, allowing for a fully Bayesian treatment of inducing points and kernel hyperparameters and leading to improved scalability. Additionally, we enable Deep Gaussian Process priors on the latent space and the handling of missing data. We evaluate our model on a range of experiments focusing on dynamic representation learning and generative modeling, demonstrating the strong performance of our approach in comparison to existing methods that combine Gaussian Processes and autoencoders.
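To make the latent sparse Gaussian Process prior concrete, below is a minimal, illustrative sketch (in PyTorch, not the authors' code) of evaluating a FITC-style sparse GP prior over latent codes indexed by auxiliary inputs such as timestamps. The names (rbf_kernel, sparse_gp_log_prior, x_ind) are hypothetical, and the kernel hyperparameters and inducing inputs are fixed here rather than given the fully Bayesian treatment described in the paper.

    # Illustrative sketch only: sparse GP prior over latent codes, assuming
    # an RBF kernel and a FITC-style approximation with M inducing inputs.
    import torch

    def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
        # Squared-exponential kernel on auxiliary inputs (e.g., timestamps).
        d2 = (x1.unsqueeze(-2) - x2.unsqueeze(-3)).pow(2).sum(-1)
        return variance * torch.exp(-0.5 * d2 / lengthscale ** 2)

    def sparse_gp_log_prior(z, x, x_ind, lengthscale=1.0, variance=1.0, jitter=1e-5):
        """Log density of latent codes z (N x D) under a sparse GP prior indexed
        by auxiliary inputs x (N x P), using M inducing inputs x_ind (M x P).
        Each latent dimension is an independent GP (FITC approximation)."""
        Kmm = rbf_kernel(x_ind, x_ind, lengthscale, variance)
        Kmm = Kmm + jitter * torch.eye(Kmm.shape[-1])
        Knm = rbf_kernel(x, x_ind, lengthscale, variance)
        Lmm = torch.linalg.cholesky(Kmm)
        A = torch.cholesky_solve(Knm.T, Lmm)           # Kmm^{-1} Kmn
        Qnn_diag = (Knm * A.T).sum(-1)                 # diag(Knm Kmm^{-1} Kmn)
        knn_diag = torch.full_like(Qnn_diag, variance)
        # FITC marginal covariance: Qnn + diag(knn - Qnn)
        cov = Knm @ A + torch.diag(knn_diag - Qnn_diag + jitter)
        dist = torch.distributions.MultivariateNormal(
            torch.zeros(z.shape[0]), covariance_matrix=cov)
        # Sum the log density over independent latent dimensions.
        return dist.log_prob(z.T).sum()

    # Usage: timestamps as auxiliary inputs, 2-D latent codes, 10 inducing points.
    x = torch.linspace(0, 1, 50).unsqueeze(-1)
    z = torch.randn(50, 2)
    x_ind = torch.linspace(0, 1, 10).unsqueeze(-1)
    print(sparse_gp_log_prior(z, x, x_ind))

In the full model of the paper, such a GP log prior would be combined with the decoder likelihood and with priors over decoder weights, inducing inputs, and kernel hyperparameters, with posterior sampling handled by the amortized MCMC scheme described in the abstract.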

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-tran23a,
  title     = {Fully {B}ayesian Autoencoders with Latent Sparse {G}aussian Processes},
  author    = {Tran, Ba-Hien and Shahbaba, Babak and Mandt, Stephan and Filippone, Maurizio},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {34409--34430},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/tran23a/tran23a.pdf},
  url       = {https://proceedings.mlr.press/v202/tran23a.html},
  abstract  = {We present a fully Bayesian autoencoder model that treats both local latent variables and global decoder parameters in a Bayesian fashion. This approach allows for flexible priors and posterior approximations while keeping the inference costs low. To achieve this, we introduce an amortized MCMC approach by utilizing an implicit stochastic network to learn sampling from the posterior over local latent variables. Furthermore, we extend the model by incorporating a Sparse Gaussian Process prior over the latent space, allowing for a fully Bayesian treatment of inducing points and kernel hyperparameters and leading to improved scalability. Additionally, we enable Deep Gaussian Process priors on the latent space and the handling of missing data. We evaluate our model on a range of experiments focusing on dynamic representation learning and generative modeling, demonstrating the strong performance of our approach in comparison to existing methods that combine Gaussian Processes and autoencoders.}
}
Endnote
%0 Conference Paper
%T Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes
%A Ba-Hien Tran
%A Babak Shahbaba
%A Stephan Mandt
%A Maurizio Filippone
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-tran23a
%I PMLR
%P 34409--34430
%U https://proceedings.mlr.press/v202/tran23a.html
%V 202
%X We present a fully Bayesian autoencoder model that treats both local latent variables and global decoder parameters in a Bayesian fashion. This approach allows for flexible priors and posterior approximations while keeping the inference costs low. To achieve this, we introduce an amortized MCMC approach by utilizing an implicit stochastic network to learn sampling from the posterior over local latent variables. Furthermore, we extend the model by incorporating a Sparse Gaussian Process prior over the latent space, allowing for a fully Bayesian treatment of inducing points and kernel hyperparameters and leading to improved scalability. Additionally, we enable Deep Gaussian Process priors on the latent space and the handling of missing data. We evaluate our model on a range of experiments focusing on dynamic representation learning and generative modeling, demonstrating the strong performance of our approach in comparison to existing methods that combine Gaussian Processes and autoencoders.
APA
Tran, B., Shahbaba, B., Mandt, S. & Filippone, M. (2023). Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:34409-34430. Available from https://proceedings.mlr.press/v202/tran23a.html.