Bayesian Basis Function Approximation for Scalable Gaussian Process Priors in Deep Generative Models

Mehmet Yiğit Balık, Maksim Sinelnikov, Priscilla Ong, Harri Lähdesmäki
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:2673-2696, 2025.

Abstract

High-dimensional time-series datasets are common in domains such as healthcare and economics. Variational autoencoder (VAE) models, where latent variables are modeled with a Gaussian process (GP) prior, have become a prominent model class to analyze such correlated datasets. However, their applications are challenged by the inherent cubic time complexity that requires specific GP approximation techniques, as well as the general challenge of modeling both shared and individual-specific correlations across time. Though inducing points enhance GP prior VAE scalability, optimizing them remains challenging, especially since discrete covariates resist gradient-based methods. In this work, we propose a scalable basis function approximation technique for GP prior VAEs that mitigates these challenges and results in linear time complexity, with a global parametrization that eliminates the need for amortized variational inference and the associated amortization gap, making it well-suited for conditional generation tasks where accuracy and efficiency are crucial. Empirical evaluations on synthetic and real-world benchmark datasets demonstrate that our approach not only improves scalability and interpretability but also drastically enhances predictive performance.
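The abstract's core claim is that replacing the exact GP prior with a basis function expansion reduces the cubic cost of GP algebra to linear in the number of time points. As a hedged illustration of the general idea (not necessarily the paper's exact construction), the sketch below uses the standard Hilbert-space reduced-rank approximation of Solin and Särkkä, where the kernel matrix K is approximated as Φ diag(s) Φᵀ with m ≪ n basis functions, so downstream computations cost O(nm²) instead of O(n³). All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x1, x2, sigma=1.0, ell=0.3):
    """Exact squared-exponential (RBF) covariance: O(n^2) memory, O(n^3) solves."""
    d = x1[:, None] - x2[None, :]
    return sigma**2 * np.exp(-0.5 * d**2 / ell**2)

def basis_approx_kernel(x, m=64, L=2.0, sigma=1.0, ell=0.3):
    """Reduced-rank approximation K ~ Phi @ diag(s) @ Phi.T with m basis functions.

    Uses the Laplacian eigenbasis on [-L, L] (Solin & Sarkka, 2020); the
    eigenfunctions are fixed sinusoids, and the kernel enters only through its
    spectral density evaluated at the eigenvalues, so GP algebra becomes
    O(n m^2) -- linear in the number of inputs n.
    """
    j = np.arange(1, m + 1)
    lam = (np.pi * j / (2.0 * L)) ** 2                                  # Laplacian eigenvalues
    Phi = np.sqrt(1.0 / L) * np.sin(np.sqrt(lam)[None, :] * (x[:, None] + L))
    # Spectral density of the RBF kernel, evaluated at sqrt(lam)
    s = sigma**2 * np.sqrt(2.0 * np.pi) * ell * np.exp(-0.5 * ell**2 * lam)
    return Phi @ np.diag(s) @ Phi.T

x = np.linspace(-1.0, 1.0, 50)
K_exact = rbf_kernel(x, x)
K_approx = basis_approx_kernel(x)
err = np.max(np.abs(K_exact - K_approx))
```

With the domain boundary L placed well outside the data, the approximation error is negligible for smooth kernels; because the basis is global and fixed, a variational posterior over the m weights gives the global parametrization the abstract refers to, with no per-datapoint inducing inputs to optimize.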

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-balik25a,
  title     = {{B}ayesian Basis Function Approximation for Scalable {G}aussian Process Priors in Deep Generative Models},
  author    = {Bal{\i}k, Mehmet Yi\u{g}it and Sinelnikov, Maksim and Ong, Priscilla and L\"{a}hdesm\"{a}ki, Harri},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {2673--2696},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/balik25a/balik25a.pdf},
  url       = {https://proceedings.mlr.press/v267/balik25a.html},
  abstract  = {High-dimensional time-series datasets are common in domains such as healthcare and economics. Variational autoencoder (VAE) models, where latent variables are modeled with a Gaussian process (GP) prior, have become a prominent model class to analyze such correlated datasets. However, their applications are challenged by the inherent cubic time complexity that requires specific GP approximation techniques, as well as the general challenge of modeling both shared and individual-specific correlations across time. Though inducing points enhance GP prior VAE scalability, optimizing them remains challenging, especially since discrete covariates resist gradient-based methods. In this work, we propose a scalable basis function approximation technique for GP prior VAEs that mitigates these challenges and results in linear time complexity, with a global parametrization that eliminates the need for amortized variational inference and the associated amortization gap, making it well-suited for conditional generation tasks where accuracy and efficiency are crucial. Empirical evaluations on synthetic and real-world benchmark datasets demonstrate that our approach not only improves scalability and interpretability but also drastically enhances predictive performance.}
}
Endnote
%0 Conference Paper
%T Bayesian Basis Function Approximation for Scalable Gaussian Process Priors in Deep Generative Models
%A Mehmet Yiğit Balık
%A Maksim Sinelnikov
%A Priscilla Ong
%A Harri Lähdesmäki
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-balik25a
%I PMLR
%P 2673--2696
%U https://proceedings.mlr.press/v267/balik25a.html
%V 267
%X High-dimensional time-series datasets are common in domains such as healthcare and economics. Variational autoencoder (VAE) models, where latent variables are modeled with a Gaussian process (GP) prior, have become a prominent model class to analyze such correlated datasets. However, their applications are challenged by the inherent cubic time complexity that requires specific GP approximation techniques, as well as the general challenge of modeling both shared and individual-specific correlations across time. Though inducing points enhance GP prior VAE scalability, optimizing them remains challenging, especially since discrete covariates resist gradient-based methods. In this work, we propose a scalable basis function approximation technique for GP prior VAEs that mitigates these challenges and results in linear time complexity, with a global parametrization that eliminates the need for amortized variational inference and the associated amortization gap, making it well-suited for conditional generation tasks where accuracy and efficiency are crucial. Empirical evaluations on synthetic and real-world benchmark datasets demonstrate that our approach not only improves scalability and interpretability but also drastically enhances predictive performance.
APA
Balık, M.Y., Sinelnikov, M., Ong, P. & Lähdesmäki, H. (2025). Bayesian Basis Function Approximation for Scalable Gaussian Process Priors in Deep Generative Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:2673-2696. Available from https://proceedings.mlr.press/v267/balik25a.html.