Scalable Training of Inference Networks for Gaussian-Process Models

Jiaxin Shi, Mohammad Emtiyaz Khan, Jun Zhu
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5758-5768, 2019.

Abstract

Inference in Gaussian process (GP) models is computationally challenging for large data, and often difficult to approximate with a small number of inducing points. We explore an alternative approximation that employs stochastic inference networks for flexible inference. Unfortunately, minibatch training of such networks makes it difficult to learn meaningful correlations over function outputs for a large dataset. We propose an algorithm that enables such training by tracking a stochastic, functional mirror-descent algorithm. At each iteration, this only requires considering a finite number of input locations, resulting in a scalable and easy-to-implement algorithm. Empirical results show comparable and, sometimes, superior performance to existing sparse variational GP methods.
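
To make the idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of training an inference network by tracking a stochastic functional mirror-descent target, specialised to 1-D GP regression with a Gaussian likelihood. It assumes a zero-mean prior with constant marginal variance prior_var, a diagonal Gaussian approximation, a fixed step size beta, and a KL fitting objective; the paper handles full covariances over function outputs and richer settings. All names (net, q, noise_var, etc.) are illustrative choices, not the paper's API.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

N, M, K = 1024, 32, 16            # dataset size, minibatch size, measurement points
noise_var, prior_var = 0.1, 1.0   # assumed Gaussian likelihood noise and prior variance
beta = 0.1                        # mirror-descent step size (assumed constant)

# Synthetic 1-D regression data.
X = torch.rand(N, 1) * 6 - 3
y = torch.sin(X) + noise_var ** 0.5 * torch.randn_like(X)

# Inference network: maps an input location to the mean and log-variance of
# the approximate posterior marginal at that location.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def q(x):
    out = net(x)
    return out[:, :1], out[:, 1:].exp()   # mean, variance

for step in range(2000):
    idx = torch.randint(N, (M,))
    xb, yb = X[idx], y[idx]               # minibatch of observed points
    xm = torch.rand(K, 1) * 6 - 3         # random measurement points
    xs = torch.cat([xb, xm])

    # Mirror-descent target in natural-parameter space: a convex combination
    # of the current q and the prior-plus-likelihood term. Only the minibatch
    # locations carry a likelihood contribution, scaled by N / M to give a
    # stochastic estimate of the full-data likelihood.
    with torch.no_grad():
        m_old, v_old = q(xs)
        eta1, eta2 = m_old / v_old, -0.5 / v_old
        lik1 = torch.cat([yb / noise_var, torch.zeros(K, 1)])
        lik2 = torch.cat([torch.full((M, 1), -0.5 / noise_var),
                          torch.zeros(K, 1)])
        t1 = (1 - beta) * eta1 + beta * (N / M) * lik1
        t2 = (1 - beta) * eta2 + beta * (-0.5 / prior_var + (N / M) * lik2)
        v_t = -0.5 / t2                   # target variance
        m_t = t1 * v_t                    # target mean

    # Track the target: minimise the KL divergence between the network's
    # diagonal Gaussian and the target Gaussian at the sampled locations.
    m_new, v_new = q(xs)
    kl = 0.5 * ((v_new + (m_new - m_t) ** 2) / v_t
                - 1.0 + v_t.log() - v_new.log()).mean()
    opt.zero_grad()
    kl.backward()
    opt.step()
```

Note how each update touches only the finitely many sampled locations (minibatch plus measurement points), which is what makes the scheme minibatch-friendly; the full-covariance treatment and the exact fitting objective follow the paper.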

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-shi19a,
  title     = {Scalable Training of Inference Networks for {G}aussian-Process Models},
  author    = {Shi, Jiaxin and Khan, Mohammad Emtiyaz and Zhu, Jun},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5758--5768},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/shi19a/shi19a.pdf},
  url       = {https://proceedings.mlr.press/v97/shi19a.html}
}
Endnote
%0 Conference Paper
%T Scalable Training of Inference Networks for Gaussian-Process Models
%A Jiaxin Shi
%A Mohammad Emtiyaz Khan
%A Jun Zhu
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-shi19a
%I PMLR
%P 5758--5768
%U https://proceedings.mlr.press/v97/shi19a.html
%V 97
APA
Shi, J., Khan, M.E. & Zhu, J. (2019). Scalable Training of Inference Networks for Gaussian-Process Models. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5758-5768. Available from https://proceedings.mlr.press/v97/shi19a.html.
