Learning to learn with Gaussian processes

Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:1466-1475, 2021.

Abstract

This paper presents Gaussian process meta-learning (GPML) for few-shot regression, which explicitly exploits the distance between regression problems/tasks using a novel task kernel. It contrasts sharply with the popular metric-based meta-learning approach in the few-shot learning literature, which relies on the distance between data inputs or their embeddings. Apart from its superior predictive performance, achieved by capturing the diversity of different tasks, GPML offers a set of representative tasks that are useful for understanding the task distribution. We empirically demonstrate the performance and interpretability of GPML on several few-shot regression problems involving a multimodal task distribution and real-world datasets.
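
To make the task-kernel idea concrete, the sketch below shows a toy multi-task-style GP regression in which the covariance between two observations is a product of a kernel on task embeddings and a kernel on inputs. This is only a minimal illustration under that assumption, not the paper's GPML construction; the function names, task embeddings, and hyperparameters are all hypothetical.

# Minimal sketch of GP regression with a task-aware product kernel,
# k((x, t), (x', t')) = k_task(t, t') * k_input(x, x').
# NOT the paper's exact GPML method; embeddings/hyperparameters are illustrative.
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def task_input_kernel(X, T, X2, T2):
    """Product of a kernel on task embeddings and a kernel on inputs."""
    return rbf(T, T2, lengthscale=0.5) * rbf(X, X2, lengthscale=1.0)

def gp_predict(X_train, T_train, y_train, X_test, T_test, noise=1e-2):
    """Standard GP posterior mean and variance under the combined kernel."""
    K = task_input_kernel(X_train, T_train, X_train, T_train) + noise * np.eye(len(y_train))
    Ks = task_input_kernel(X_test, T_test, X_train, T_train)
    Kss = task_input_kernel(X_test, T_test, X_test, T_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v**2).sum(0)
    return mean, var

# Toy usage: two sinusoid tasks differing in phase, each with a 1-D task embedding.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
T = np.repeat([[0.0], [1.0]], 10, axis=0)          # hypothetical task embeddings
y = np.sin(X[:, 0] + T[:, 0] * np.pi) + 0.05 * rng.standard_normal(20)
Xs = np.linspace(-3, 3, 50)[:, None]
mean, var = gp_predict(X, T, y, Xs, np.zeros((50, 1)))  # predictions for task 0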

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-nguyen21c,
  title     = {Learning to learn with Gaussian processes},
  author    = {Nguyen, Quoc Phong and Low, Bryan Kian Hsiang and Jaillet, Patrick},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages     = {1466--1475},
  year      = {2021},
  editor    = {de Campos, Cassio and Maathuis, Marloes H.},
  volume    = {161},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v161/nguyen21c/nguyen21c.pdf},
  url       = {https://proceedings.mlr.press/v161/nguyen21c.html}
}
APA
Nguyen, Q.P., Low, B.K.H. & Jaillet, P. (2021). Learning to learn with Gaussian processes. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:1466-1475. Available from https://proceedings.mlr.press/v161/nguyen21c.html.

Related Material

Download PDF: https://proceedings.mlr.press/v161/nguyen21c/nguyen21c.pdf