Subset-of-data variational inference for deep Gaussian-processes regression

Ayush Jain, P. K. Srijith, Mohammad Emtiyaz Khan
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:1362-1370, 2021.

Abstract

Deep Gaussian Processes (DGPs) are multi-layer, flexible extensions of Gaussian Processes, but their training remains challenging. Most existing methods for inference in DGPs use sparse approximations, which require optimization over a large number of inducing inputs and their locations across layers. In this paper, we simplify training by setting the locations to a fixed subset of the data and sampling the inducing inputs from a variational distribution. This reduces the number of trainable parameters and the computational cost without any performance degradation, as demonstrated by our empirical results on regression data sets. Our modifications simplify and stabilize DGP training methods while making them amenable to sampling schemes such as leverage scores and determinantal point processes.
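To make the core idea concrete, here is a minimal, hypothetical Python sketch (not the authors' implementation): inducing locations are fixed to a subset of the training inputs, chosen either uniformly at random or by kernel ridge leverage scores, rather than optimized. The function names, the RBF kernel, and all hyperparameter values below are illustrative assumptions, not details taken from the paper.

import numpy as np

def rbf_kernel(X, Z, lengthscale=1.0, variance=1.0):
    # Squared Euclidean distances between all rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def ridge_leverage_scores(X, lam=1e-2):
    # l_i = [K (K + lam I)^{-1}]_{ii}: a measure of how influential point i is.
    K = rbf_kernel(X, X)
    n = X.shape[0]
    return np.diag(K @ np.linalg.solve(K + lam * np.eye(n), np.eye(n)))

def subset_inducing_locations(X, M, method="uniform", rng=None):
    # Fix the M inducing locations to a subset of the data instead of
    # treating them as free parameters to optimize.
    rng = np.random.default_rng(0) if rng is None else rng
    if method == "uniform":
        idx = rng.choice(X.shape[0], size=M, replace=False)
    elif method == "leverage":
        # Sample proportionally to leverage scores; a k-DPP over the kernel
        # matrix (the other scheme named in the abstract) could be swapped in.
        p = ridge_leverage_scores(X)
        idx = rng.choice(X.shape[0], size=M, replace=False, p=p / p.sum())
    else:
        raise ValueError(method)
    return X[idx]

# Usage: pick 20 of 500 toy 1-D inputs as fixed inducing locations.
X = np.random.default_rng(1).uniform(-3.0, 3.0, size=(500, 1))
Z = subset_inducing_locations(X, M=20, method="leverage")
print(Z.shape)  # (20, 1)

Because the locations are fixed up front, no gradients flow through them during training; only the variational distribution over the inducing variables remains to be learned, which is the parameter and cost saving the abstract describes.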

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-jain21a,
  title     = {Subset-of-data variational inference for deep Gaussian-processes regression},
  author    = {Jain, Ayush and Srijith, P. K. and Khan, Mohammad Emtiyaz},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages     = {1362--1370},
  year      = {2021},
  editor    = {de Campos, Cassio and Maathuis, Marloes H.},
  volume    = {161},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v161/jain21a/jain21a.pdf},
  url       = {https://proceedings.mlr.press/v161/jain21a.html},
  abstract  = {Deep Gaussian Processes (DGPs) are multi-layer, flexible extensions of Gaussian Processes, but their training remains challenging. Most existing methods for inference in DGPs use sparse approximations, which require optimization over a large number of inducing inputs and their locations across layers. In this paper, we simplify training by setting the locations to a fixed subset of the data and sampling the inducing inputs from a variational distribution. This reduces the number of trainable parameters and the computational cost without any performance degradation, as demonstrated by our empirical results on regression data sets. Our modifications simplify and stabilize DGP training methods while making them amenable to sampling schemes such as leverage scores and determinantal point processes.}
}
Endnote
%0 Conference Paper
%T Subset-of-data variational inference for deep Gaussian-processes regression
%A Ayush Jain
%A P. K. Srijith
%A Mohammad Emtiyaz Khan
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-jain21a
%I PMLR
%P 1362--1370
%U https://proceedings.mlr.press/v161/jain21a.html
%V 161
%X Deep Gaussian Processes (DGPs) are multi-layer, flexible extensions of Gaussian Processes, but their training remains challenging. Most existing methods for inference in DGPs use sparse approximations, which require optimization over a large number of inducing inputs and their locations across layers. In this paper, we simplify training by setting the locations to a fixed subset of the data and sampling the inducing inputs from a variational distribution. This reduces the number of trainable parameters and the computational cost without any performance degradation, as demonstrated by our empirical results on regression data sets. Our modifications simplify and stabilize DGP training methods while making them amenable to sampling schemes such as leverage scores and determinantal point processes.
APA
Jain, A., Srijith, P. K. & Khan, M. E. (2021). Subset-of-data variational inference for deep Gaussian-processes regression. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:1362-1370. Available from https://proceedings.mlr.press/v161/jain21a.html.