Variational Learning of Inducing Variables in Sparse Gaussian Processes

Michalis Titsias
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:567-574, 2009.

Abstract

Sparse Gaussian process methods that use inducing variables require the selection of the inducing inputs and the kernel hyperparameters. We introduce a variational formulation for sparse approximations that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound of the true log marginal likelihood. The key property of this formulation is that the inducing inputs are defined to be variational parameters which are selected by minimizing the Kullback-Leibler divergence between the variational distribution and the exact posterior distribution over the latent function values. We apply this technique to regression and we compare it with other approaches in the literature.
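
The abstract refers to maximizing a lower bound on the true log marginal likelihood. For regression this collapsed bound has a closed form, and the NumPy sketch below (not taken from the paper itself; the function name titsias_bound and the assumption that the kernel quantities Knn_diag, Knm, Kmm and the noise variance sigma2 are precomputed are purely illustrative) shows one way it can be evaluated:

import numpy as np

def titsias_bound(y, Knn_diag, Knm, Kmm, sigma2):
    """Collapsed variational lower bound for sparse GP regression.

    A minimal sketch: y is the (centred) n-vector of targets, Knn_diag the
    diagonal of the full kernel matrix, Knm the n x m cross-covariance
    between training and inducing inputs, Kmm the m x m kernel matrix on
    the inducing inputs, and sigma2 the noise variance.  Returns
        log N(y | 0, sigma2*I + Qnn) - tr(Knn - Qnn) / (2*sigma2),
    with Qnn = Knm Kmm^{-1} Kmn.
    """
    n, m = Knm.shape
    L = np.linalg.cholesky(Kmm + 1e-6 * np.eye(m))   # jitter for stability
    A0 = np.linalg.solve(L, Knm.T)                   # L^{-1} Kmn, shape m x n
    A = A0 / np.sqrt(sigma2)
    B = np.eye(m) + A @ A.T
    LB = np.linalg.cholesky(B)
    c = np.linalg.solve(LB, A @ y) / np.sqrt(sigma2)

    # log N(y | 0, sigma2*I + Qnn) via the matrix inversion/determinant lemmas
    log_det = 2.0 * np.sum(np.log(np.diag(LB))) + n * np.log(sigma2)
    quad = y @ y / sigma2 - c @ c
    log_marg = -0.5 * (n * np.log(2.0 * np.pi) + log_det + quad)

    # trace term that makes this a lower bound rather than an approximation
    Qnn_diag = np.sum(A0 ** 2, axis=0)
    return log_marg - 0.5 * np.sum(Knn_diag - Qnn_diag) / sigma2

Because the inducing inputs enter this objective only through Knm and Kmm as variational parameters, selecting them (together with the kernel hyperparameters) reduces to gradient-based maximization of this single quantity.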

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-titsias09a,
  title     = {Variational Learning of Inducing Variables in Sparse Gaussian Processes},
  author    = {Titsias, Michalis},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages     = {567--574},
  year      = {2009},
  editor    = {van Dyk, David and Welling, Max},
  volume    = {5},
  series    = {Proceedings of Machine Learning Research},
  address   = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v5/titsias09a/titsias09a.pdf},
  url       = {https://proceedings.mlr.press/v5/titsias09a.html},
  abstract  = {Sparse Gaussian process methods that use inducing variables require the selection of the inducing inputs and the kernel hyperparameters. We introduce a variational formulation for sparse approximations that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound of the true log marginal likelihood. The key property of this formulation is that the inducing inputs are defined to be variational parameters which are selected by minimizing the Kullback-Leibler divergence between the variational distribution and the exact posterior distribution over the latent function values. We apply this technique to regression and we compare it with other approaches in the literature.}
}
Endnote
%0 Conference Paper
%T Variational Learning of Inducing Variables in Sparse Gaussian Processes
%A Michalis Titsias
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-titsias09a
%I PMLR
%P 567--574
%U https://proceedings.mlr.press/v5/titsias09a.html
%V 5
%X Sparse Gaussian process methods that use inducing variables require the selection of the inducing inputs and the kernel hyperparameters. We introduce a variational formulation for sparse approximations that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound of the true log marginal likelihood. The key property of this formulation is that the inducing inputs are defined to be variational parameters which are selected by minimizing the Kullback-Leibler divergence between the variational distribution and the exact posterior distribution over the latent function values. We apply this technique to regression and we compare it with other approaches in the literature.
RIS
TY - CPAPER
TI - Variational Learning of Inducing Variables in Sparse Gaussian Processes
AU - Michalis Titsias
BT - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
DA - 2009/04/15
ED - David van Dyk
ED - Max Welling
ID - pmlr-v5-titsias09a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 5
SP - 567
EP - 574
L1 - http://proceedings.mlr.press/v5/titsias09a/titsias09a.pdf
UR - https://proceedings.mlr.press/v5/titsias09a.html
AB - Sparse Gaussian process methods that use inducing variables require the selection of the inducing inputs and the kernel hyperparameters. We introduce a variational formulation for sparse approximations that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound of the true log marginal likelihood. The key property of this formulation is that the inducing inputs are defined to be variational parameters which are selected by minimizing the Kullback-Leibler divergence between the variational distribution and the exact posterior distribution over the latent function values. We apply this technique to regression and we compare it with other approaches in the literature.
ER -
APA
Titsias, M. (2009). Variational Learning of Inducing Variables in Sparse Gaussian Processes. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 5:567-574. Available from https://proceedings.mlr.press/v5/titsias09a.html.
