Function Contrastive Learning of Transferable Meta-Representations

Muhammad Waleed Gondal, Shruti Joshi, Nasim Rahaman, Stefan Bauer, Manuel Wuthrich, Bernhard Schölkopf
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3755-3765, 2021.

Abstract

Meta-learning algorithms adapt quickly to new tasks that are drawn from the same task distribution as the training tasks. The mechanism leading to fast adaptation is the conditioning of a downstream predictive model on the inferred representation of the task’s underlying data generative process, or function. This meta-representation, which is computed from a few observed examples of the underlying function, is learned jointly with the predictive model. In this work, we study the implications of this joint training on the transferability of the meta-representations. Our goal is to learn meta-representations that are robust to noise in the data and facilitate solving a wide range of downstream tasks that share the same underlying functions. To this end, we propose a decoupled encoder-decoder approach to supervised meta-learning, where the encoder is trained with a contrastive objective to find a good representation of the underlying function. In particular, our training scheme is driven by the self-supervision signal indicating whether two sets of examples stem from the same function. Our experiments on a number of synthetic and real-world datasets show that the representations we obtain outperform strong baselines in terms of downstream performance and noise robustness, even when these baselines are trained in an end-to-end manner.
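The abstract describes training a function encoder with a contrastive objective whose self-supervision signal is whether two sets of examples come from the same underlying function. The snippet below is a minimal, illustrative sketch of such an objective, assuming a permutation-invariant set encoder and an InfoNCE-style loss; the architecture, loss form, and hyperparameters are assumptions for exposition, not the authors' exact implementation.

```python
# Illustrative sketch (not the paper's code): two disjoint observation sets of
# the same function should map to similar embeddings, while sets from other
# functions in the batch act as negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SetEncoder(nn.Module):
    """Permutation-invariant encoder that embeds a set of (x, y) observations."""

    def __init__(self, x_dim=1, y_dim=1, hidden=128, z_dim=64):
        super().__init__()
        self.point_net = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, z_dim)

    def forward(self, x, y):
        # x: [B, N, x_dim], y: [B, N, y_dim]
        h = self.point_net(torch.cat([x, y], dim=-1))  # per-point features [B, N, hidden]
        return self.head(h.mean(dim=1))                # mean-pool over the set -> [B, z_dim]


def function_contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style loss: z_a[i] and z_b[i] encode disjoint observation sets
    of function i; all cross-function pairs are negatives."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature               # [B, B] similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)


# Usage sketch: sample a batch of 32 functions and split each function's
# observations into two disjoint views (random data stands in for real tasks).
encoder = SetEncoder()
x_a, y_a = torch.randn(32, 10, 1), torch.randn(32, 10, 1)  # view A: 10 points per function
x_b, y_b = torch.randn(32, 10, 1), torch.randn(32, 10, 1)  # view B: another 10 points
loss = function_contrastive_loss(encoder(x_a, y_a), encoder(x_b, y_b))
loss.backward()
```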

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-gondal21a,
  title     = {Function Contrastive Learning of Transferable Meta-Representations},
  author    = {Gondal, Muhammad Waleed and Joshi, Shruti and Rahaman, Nasim and Bauer, Stefan and Wuthrich, Manuel and Sch{\"o}lkopf, Bernhard},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {3755--3765},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/gondal21a/gondal21a.pdf},
  url       = {https://proceedings.mlr.press/v139/gondal21a.html},
  abstract  = {Meta-learning algorithms adapt quickly to new tasks that are drawn from the same task distribution as the training tasks. The mechanism leading to fast adaptation is the conditioning of a downstream predictive model on the inferred representation of the task’s underlying data generative process, or \emph{function}. This \emph{meta-representation}, which is computed from a few observed examples of the underlying function, is learned jointly with the predictive model. In this work, we study the implications of this joint training on the transferability of the meta-representations. Our goal is to learn meta-representations that are robust to noise in the data and facilitate solving a wide range of downstream tasks that share the same underlying functions. To this end, we propose a decoupled encoder-decoder approach to supervised meta-learning, where the encoder is trained with a contrastive objective to find a good representation of the underlying function. In particular, our training scheme is driven by the self-supervision signal indicating whether two sets of examples stem from the same function. Our experiments on a number of synthetic and real-world datasets show that the representations we obtain outperform strong baselines in terms of downstream performance and noise robustness, even when these baselines are trained in an end-to-end manner.}
}
Endnote
%0 Conference Paper
%T Function Contrastive Learning of Transferable Meta-Representations
%A Muhammad Waleed Gondal
%A Shruti Joshi
%A Nasim Rahaman
%A Stefan Bauer
%A Manuel Wuthrich
%A Bernhard Schölkopf
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-gondal21a
%I PMLR
%P 3755--3765
%U https://proceedings.mlr.press/v139/gondal21a.html
%V 139
%X Meta-learning algorithms adapt quickly to new tasks that are drawn from the same task distribution as the training tasks. The mechanism leading to fast adaptation is the conditioning of a downstream predictive model on the inferred representation of the task’s underlying data generative process, or function. This meta-representation, which is computed from a few observed examples of the underlying function, is learned jointly with the predictive model. In this work, we study the implications of this joint training on the transferability of the meta-representations. Our goal is to learn meta-representations that are robust to noise in the data and facilitate solving a wide range of downstream tasks that share the same underlying functions. To this end, we propose a decoupled encoder-decoder approach to supervised meta-learning, where the encoder is trained with a contrastive objective to find a good representation of the underlying function. In particular, our training scheme is driven by the self-supervision signal indicating whether two sets of examples stem from the same function. Our experiments on a number of synthetic and real-world datasets show that the representations we obtain outperform strong baselines in terms of downstream performance and noise robustness, even when these baselines are trained in an end-to-end manner.
APA
Gondal, M.W., Joshi, S., Rahaman, N., Bauer, S., Wuthrich, M. & Schölkopf, B. (2021). Function Contrastive Learning of Transferable Meta-Representations. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:3755-3765. Available from https://proceedings.mlr.press/v139/gondal21a.html.
