Function-space Inference with Sparse Implicit Processes

Simon Rodríguez-Santana, Bryan Zaldivar, Daniel Hernandez-Lobato
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:18723-18740, 2022.

Abstract

Implicit Processes (IPs) represent a flexible framework that can be used to describe a wide variety of models, from Bayesian neural networks and neural samplers to data generators and many others. IPs also allow for approximate inference in function space. This change of formulation sidesteps intrinsic degeneracies of parameter-space approximate inference, which stem from the high number of parameters and their strong dependencies in large models. To this end, previous works have attempted to employ IPs both to set up the prior and to approximate the resulting posterior. However, this has proven to be a challenging task. Existing methods that can tune the prior IP result in a Gaussian predictive distribution, which fails to capture important data patterns. By contrast, methods that produce flexible predictive distributions by using another IP to approximate the posterior process cannot tune the prior IP to the observed data. We propose here the first method that can accomplish both goals. For this, we rely on an inducing-point representation of the prior IP, as is often done in the context of sparse Gaussian processes. The result is a scalable method for approximate inference with IPs that can tune the prior IP parameters to the data and that provides accurate non-Gaussian predictive distributions.
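
To make the two ingredients of the abstract concrete, the following is a minimal sketch in Python/NumPy of (a) an implicit process prior, represented only through function samples, and (b) an inducing-point summary of that prior, as in sparse Gaussian processes. Everything here (the toy Bayesian-neural-network prior, the variable names, the Monte Carlo setup) is an illustrative assumption, not the authors' implementation; it shows only why the implied predictive distribution is in general non-Gaussian.

import numpy as np

rng = np.random.default_rng(0)

def sample_prior_function():
    """Draw one function from an implicit process (IP) prior.

    Here the IP is a small Bayesian neural network: sampling its
    weights from a Gaussian prior induces a distribution over
    functions that has no closed-form density (hence "implicit").
    This toy choice is an assumption; the framework admits any
    sampler of functions.
    """
    w1 = rng.normal(size=(1, 20))
    b1 = rng.normal(size=20)
    w2 = rng.normal(size=(20, 1))
    b2 = rng.normal(size=1)
    return lambda x: np.tanh(x @ w1 + b1) @ w2 + b2

# Inducing points: a small set of input locations whose function
# values summarize the prior IP, as in sparse Gaussian processes.
Z = np.linspace(-3.0, 3.0, 10).reshape(-1, 1)

# Monte Carlo draws of the prior evaluated at the inducing points.
# Each row is one sampled f(Z); the empirical distribution of these
# rows stands in for the intractable prior over the inducing values.
S = 500
u_samples = np.stack([sample_prior_function()(Z).ravel() for _ in range(S)])

# The implied distribution at a test input is a mixture over sampled
# functions, hence generally non-Gaussian.
x_test = np.array([[0.5]])
f_test = np.array([sample_prior_function()(x_test).item() for _ in range(S)])
print("prior predictive mean/std at x=0.5:", f_test.mean(), f_test.std())

The paper's method then adjusts an approximate posterior over the function values at the inducing points while also tuning the prior IP parameters to the data; the sketch above only illustrates the prior-sampling and inducing-point ingredients it builds on.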

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-rodri-guez-santana22a,
  title     = {Function-space Inference with Sparse Implicit Processes},
  author    = {Rodr\'{\i}guez-Santana, Simon and Zaldivar, Bryan and Hernandez-Lobato, Daniel},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {18723--18740},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/rodri-guez-santana22a/rodri-guez-santana22a.pdf},
  url       = {https://proceedings.mlr.press/v162/rodri-guez-santana22a.html}
}
Endnote
%0 Conference Paper
%T Function-space Inference with Sparse Implicit Processes
%A Simon Rodríguez-Santana
%A Bryan Zaldivar
%A Daniel Hernandez-Lobato
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-rodri-guez-santana22a
%I PMLR
%P 18723--18740
%U https://proceedings.mlr.press/v162/rodri-guez-santana22a.html
%V 162
APA
Rodríguez-Santana, S., Zaldivar, B. & Hernandez-Lobato, D. (2022). Function-space Inference with Sparse Implicit Processes. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:18723-18740. Available from https://proceedings.mlr.press/v162/rodri-guez-santana22a.html.