Sparse Gaussian Neural Processes

Tommy Rochussen, Vincent Fortuin
Proceedings of the 7th Symposium on Advances in Approximate Bayesian Inference, PMLR 289:194-219, 2025.

Abstract

Despite significant recent advances in probabilistic meta-learning, it is common for practitioners to avoid using deep learning models due to a comparative lack of interpretability. Instead, many practitioners simply use non-meta-models such as Gaussian processes with interpretable priors, and conduct the tedious procedure of training their model from scratch for each task they encounter. While this is justifiable for tasks with a limited number of data points, the cubic computational cost of exact Gaussian process inference renders this prohibitive when each task has many observations. To remedy this, we introduce a family of models that meta-learn sparse Gaussian process inference. Not only does this enable rapid prediction on new tasks with sparse Gaussian processes, but since our models have clear interpretations as members of the neural process family, it also allows manual elicitation of priors in a neural process for the first time. In meta-learning regimes for which the number of observed tasks is small or for which expert domain knowledge is available, this offers a crucial advantage.
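As standard background for the cubic cost mentioned in the abstract (this sketch is not taken from the paper itself, and the symbols N, M, K, and σ are our notation): exact Gaussian process prediction requires inverting the N×N kernel matrix of the N training observations, whereas a sparse approximation conditions on M ≪ N inducing points instead, which is what makes amortising this inference step across tasks attractive.

\[
\mu(x_\ast) = k_\ast^\top \left(K_{NN} + \sigma^2 I\right)^{-1} \mathbf{y}
\quad\Rightarrow\quad \mathcal{O}(N^3) \text{ per task,}
\]
\[
\text{sparse (inducing-point) inference:} \quad \mathcal{O}(NM^2) \text{ per task, } M \ll N.
\]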

Cite this Paper

BibTeX
@InProceedings{pmlr-v289-rochussen25a,
  title     = {Sparse {G}aussian Neural Processes},
  author    = {Rochussen, Tommy and Fortuin, Vincent},
  booktitle = {Proceedings of the 7th Symposium on Advances in Approximate Bayesian Inference},
  pages     = {194--219},
  year      = {2025},
  editor    = {Allingham, James Urquhart and Swaroop, Siddharth},
  volume    = {289},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Apr},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v289/main/assets/rochussen25a/rochussen25a.pdf},
  url       = {https://proceedings.mlr.press/v289/rochussen25a.html}
}
Endnote
%0 Conference Paper
%T Sparse Gaussian Neural Processes
%A Tommy Rochussen
%A Vincent Fortuin
%B Proceedings of the 7th Symposium on Advances in Approximate Bayesian Inference
%C Proceedings of Machine Learning Research
%D 2025
%E James Urquhart Allingham
%E Siddharth Swaroop
%F pmlr-v289-rochussen25a
%I PMLR
%P 194--219
%U https://proceedings.mlr.press/v289/rochussen25a.html
%V 289
APA
Rochussen, T., & Fortuin, V. (2025). Sparse Gaussian Neural Processes. Proceedings of the 7th Symposium on Advances in Approximate Bayesian Inference, in Proceedings of Machine Learning Research 289:194-219. Available from https://proceedings.mlr.press/v289/rochussen25a.html.
