Variational Implicit Processes

Chao Ma, Yingzhen Li, José Miguel Hernández-Lobato
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:4222-4233, 2019.

Abstract

We introduce implicit processes (IPs): stochastic processes that place implicitly defined multivariate distributions over any finite collection of random variables. IPs are therefore highly flexible implicit priors over functions, with examples including data simulators, Bayesian neural networks, and non-linear transformations of stochastic processes. A novel and efficient approximate inference algorithm for IPs, namely the variational implicit process (VIP), is derived using generalised wake-sleep updates. This method yields simple update equations and allows scalable hyper-parameter learning with stochastic optimization. Experiments show that the VIP returns better uncertainty estimates and lower errors than existing inference methods for challenging models such as Bayesian neural networks and Gaussian processes.
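To make the abstract's central definition concrete, here is a minimal sketch (illustrative only, not the authors' code) of an implicit process: a Bayesian neural network whose weights z are drawn from a prior defines a random function f(x) = g(x, z), and evaluating sampled functions at any finite collection of inputs yields joint samples from an implicitly defined multivariate distribution, i.e. one with no closed-form density.

# A minimal, hypothetical sketch of an implicit process prior: a Bayesian
# neural network with random weights. Sampling weights z ~ p(z) gives one
# function draw f(.) = g(., z); evaluating draws at a finite set of inputs
# gives joint samples whose density is intractable, hence "implicit".
import numpy as np

rng = np.random.default_rng(0)

def sample_function(hidden=50):
    # Draw one function by sampling all weights from a Gaussian prior.
    w1 = rng.normal(0.0, 1.0, size=(1, hidden))
    b1 = rng.normal(0.0, 1.0, size=hidden)
    w2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), size=(hidden, 1))
    b2 = rng.normal(0.0, 1.0)
    return lambda x: np.tanh(x @ w1 + b1) @ w2 + b2

# Finite collection of inputs: each prior draw is one joint sample
# (f(x_1), ..., f(x_n)) from the implicitly defined distribution.
x = np.linspace(-3.0, 3.0, 20).reshape(-1, 1)
prior_draws = np.stack([sample_function()(x).ravel() for _ in range(100)])
print(prior_draws.shape)  # (100, 20): 100 function draws over 20 inputs

Function draws of this kind are what a tractable approximation is fitted to during the generalised wake-sleep updates; the Gaussian weight prior and the specific network used here are placeholder assumptions, standing in for whatever implicit generator one actually uses.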

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-ma19b,
  title     = {Variational Implicit Processes},
  author    = {Ma, Chao and Li, Yingzhen and Hernandez-Lobato, Jose Miguel},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {4222--4233},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/ma19b/ma19b.pdf},
  url       = {https://proceedings.mlr.press/v97/ma19b.html},
  abstract  = {We introduce implicit processes (IPs): stochastic processes that place implicitly defined multivariate distributions over any finite collection of random variables. IPs are therefore highly flexible implicit priors over functions, with examples including data simulators, Bayesian neural networks, and non-linear transformations of stochastic processes. A novel and efficient approximate inference algorithm for IPs, namely the variational implicit process (VIP), is derived using generalised wake-sleep updates. This method yields simple update equations and allows scalable hyper-parameter learning with stochastic optimization. Experiments show that the VIP returns better uncertainty estimates and lower errors than existing inference methods for challenging models such as Bayesian neural networks and Gaussian processes.}
}
Endnote
%0 Conference Paper
%T Variational Implicit Processes
%A Chao Ma
%A Yingzhen Li
%A Jose Miguel Hernandez-Lobato
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-ma19b
%I PMLR
%P 4222--4233
%U https://proceedings.mlr.press/v97/ma19b.html
%V 97
%X We introduce implicit processes (IPs): stochastic processes that place implicitly defined multivariate distributions over any finite collection of random variables. IPs are therefore highly flexible implicit priors over functions, with examples including data simulators, Bayesian neural networks, and non-linear transformations of stochastic processes. A novel and efficient approximate inference algorithm for IPs, namely the variational implicit process (VIP), is derived using generalised wake-sleep updates. This method yields simple update equations and allows scalable hyper-parameter learning with stochastic optimization. Experiments show that the VIP returns better uncertainty estimates and lower errors than existing inference methods for challenging models such as Bayesian neural networks and Gaussian processes.
APA
Ma, C., Li, Y. & Hernandez-Lobato, J.M. (2019). Variational Implicit Processes. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:4222-4233. Available from https://proceedings.mlr.press/v97/ma19b.html.
