Deep Sigma Point Processes

Martin Jankowiak, Geoff Pleiss, Jacob Gardner
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 124:789-798, 2020.

Abstract

We introduce Deep Sigma Point Processes, a class of parametric models inspired by the compositional structure of Deep Gaussian Processes (DGPs). Deep Sigma Point Processes (DSPPs) retain many of the attractive features of (variational) DGPs, including mini-batch training and predictive uncertainty that is controlled by kernel basis functions. Importantly, since DSPPs admit a simple maximum likelihood inference procedure, the resulting predictive distributions are not degraded by any posterior approximations. In an extensive empirical comparison on univariate and multivariate regression tasks we find that the resulting predictive distributions are significantly better calibrated than those obtained with other probabilistic methods for scalable regression, including variational DGPs, often by as much as a nat per datapoint.

Cite this Paper


BibTeX
@InProceedings{pmlr-v124-jankowiak20a,
  title     = {Deep Sigma Point Processes},
  author    = {Jankowiak, Martin and Pleiss, Geoff and Gardner, Jacob},
  booktitle = {Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)},
  pages     = {789--798},
  year      = {2020},
  editor    = {Jonas Peters and David Sontag},
  volume    = {124},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v124/jankowiak20a/jankowiak20a.pdf},
  url       = {http://proceedings.mlr.press/v124/jankowiak20a.html},
  abstract  = {We introduce Deep Sigma Point Processes, a class of parametric models inspired by the compositional structure of Deep Gaussian Processes (DGPs). Deep Sigma Point Processes (DSPPs) retain many of the attractive features of (variational) DGPs, including mini-batch training and predictive uncertainty that is controlled by kernel basis functions. Importantly, since DSPPs admit a simple maximum likelihood inference procedure, the resulting predictive distributions are not degraded by any posterior approximations. In an extensive empirical comparison on univariate and multivariate regression tasks we find that the resulting predictive distributions are significantly better calibrated than those obtained with other probabilistic methods for scalable regression, including variational DGPs, often by as much as a nat per datapoint.}
}
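To use this entry from a LaTeX document, a minimal sketch (the filenames `refs.bib` and `main.tex` are illustrative; any standard bibliography style works):

```latex
% main.tex -- assumes the BibTeX entry above has been saved in refs.bib
\documentclass{article}
\begin{document}
Deep Sigma Point Processes were introduced by \cite{pmlr-v124-jankowiak20a}.
\bibliographystyle{plain}
\bibliography{refs}
\end{document}
```

Compiling with `pdflatex main && bibtex main && pdflatex main && pdflatex main` resolves the citation.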
Endnote
%0 Conference Paper
%T Deep Sigma Point Processes
%A Martin Jankowiak
%A Geoff Pleiss
%A Jacob Gardner
%B Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
%C Proceedings of Machine Learning Research
%D 2020
%E Jonas Peters
%E David Sontag
%F pmlr-v124-jankowiak20a
%I PMLR
%P 789--798
%U http://proceedings.mlr.press/v124/jankowiak20a.html
%V 124
%X We introduce Deep Sigma Point Processes, a class of parametric models inspired by the compositional structure of Deep Gaussian Processes (DGPs). Deep Sigma Point Processes (DSPPs) retain many of the attractive features of (variational) DGPs, including mini-batch training and predictive uncertainty that is controlled by kernel basis functions. Importantly, since DSPPs admit a simple maximum likelihood inference procedure, the resulting predictive distributions are not degraded by any posterior approximations. In an extensive empirical comparison on univariate and multivariate regression tasks we find that the resulting predictive distributions are significantly better calibrated than those obtained with other probabilistic methods for scalable regression, including variational DGPs, often by as much as a nat per datapoint.
APA
Jankowiak, M., Pleiss, G., & Gardner, J. (2020). Deep Sigma Point Processes. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), in Proceedings of Machine Learning Research 124:789-798. Available from http://proceedings.mlr.press/v124/jankowiak20a.html.