Sparse Gaussian Processes Revisited: Bayesian Approaches to Inducing-Variable Approximations

Simone Rossi, Markus Heinonen, Edwin Bonilla, Zheyang Shen, Maurizio Filippone
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:1837-1845, 2021.

Abstract

Variational inference techniques based on inducing variables provide an elegant framework for scalable posterior estimation in Gaussian process (GP) models. Besides enabling scalability, one of their main advantages over sparse approximations using direct marginal likelihood maximization is that they provide a robust alternative for point estimation of the inducing inputs, i.e. the location of the inducing variables. In this work we challenge the common wisdom that optimizing the inducing inputs in the variational framework yields optimal performance. We show that, by revisiting old model approximations such as the fully-independent training conditionals endowed with powerful sampling-based inference methods, treating both inducing locations and GP hyper-parameters in a Bayesian way can improve performance significantly. Based on stochastic gradient Hamiltonian Monte Carlo, we develop a fully Bayesian approach to scalable GP and deep GP models, and demonstrate its state-of-the-art performance through an extensive experimental campaign across several regression and classification problems.
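As a concrete illustration of the inference engine the abstract names, below is a minimal sketch of a stochastic gradient Hamiltonian Monte Carlo (SGHMC; Chen et al., 2014) update. This is not the authors' implementation: the names sghmc and grad_log_post are hypothetical, the mass matrix is taken as the identity, and the gradient-noise correction term is assumed negligible. In the paper's setting, theta would stack the inducing inputs and the GP hyper-parameters, and grad_log_post would return a minibatch estimate of the gradient of the log joint density.

    import numpy as np

    def sghmc(grad_log_post, theta0, n_steps=2000, step_size=1e-4,
              friction=0.05, seed=0):
        # Minimal SGHMC (Chen et al., 2014): identity mass matrix,
        # gradient-noise correction term (beta-hat) dropped.
        rng = np.random.default_rng(seed)
        theta = np.array(theta0, dtype=float)
        v = np.zeros_like(theta)
        samples = []
        for _ in range(n_steps):
            # Injected noise with variance 2 * friction * step_size
            # keeps the correct stationary distribution.
            noise = rng.normal(0.0, np.sqrt(2.0 * friction * step_size),
                               size=theta.shape)
            v = (1.0 - friction) * v + step_size * grad_log_post(theta) + noise
            theta = theta + v
            samples.append(theta.copy())
        return np.array(samples)

    # Toy sanity check: sample a standard Gaussian, where
    # grad log p(theta) = -theta; discard the first half as burn-in.
    draws = sghmc(lambda t: -t, theta0=np.zeros(2))
    print(draws[1000:].mean(axis=0), draws[1000:].std(axis=0))

The toy check targets a standard Gaussian only to verify the sampler; applying it to a sparse GP would replace the lambda with a stochastic gradient of log p(y | Z, theta) + log p(Z, theta) computed on minibatches.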

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-rossi21a,
  title     = {Sparse Gaussian Processes Revisited: Bayesian Approaches to Inducing-Variable Approximations},
  author    = {Rossi, Simone and Heinonen, Markus and Bonilla, Edwin and Shen, Zheyang and Filippone, Maurizio},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {1837--1845},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/rossi21a/rossi21a.pdf},
  url       = {https://proceedings.mlr.press/v130/rossi21a.html}
}
Endnote
%0 Conference Paper
%T Sparse Gaussian Processes Revisited: Bayesian Approaches to Inducing-Variable Approximations
%A Simone Rossi
%A Markus Heinonen
%A Edwin Bonilla
%A Zheyang Shen
%A Maurizio Filippone
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-rossi21a
%I PMLR
%P 1837--1845
%U https://proceedings.mlr.press/v130/rossi21a.html
%V 130
APA
Rossi, S., Heinonen, M., Bonilla, E., Shen, Z. & Filippone, M. (2021). Sparse Gaussian Processes Revisited: Bayesian Approaches to Inducing-Variable Approximations. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:1837-1845. Available from https://proceedings.mlr.press/v130/rossi21a.html.