Contrastive Learning Inverts the Data Generating Process

Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, Wieland Brendel
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:12979-12990, 2021.

Abstract

Contrastive learning has recently seen tremendous success in self-supervised learning. So far, however, it is largely unclear why the learned representations generalize so effectively to a large variety of downstream tasks. We here prove that feedforward models trained with objectives belonging to the commonly used InfoNCE family learn to implicitly invert the underlying generative model of the observed data. While the proofs make certain statistical assumptions about the generative model, we observe empirically that our findings hold even if these assumptions are severely violated. Our theory highlights a fundamental connection between contrastive learning, generative modeling, and nonlinear independent component analysis, thereby furthering our understanding of the learned representations as well as providing a theoretical foundation to derive more effective contrastive losses.
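The InfoNCE family of objectives referenced in the abstract can be sketched for a single anchor as a cross-entropy over one positive and several negative similarity scores (a minimal illustrative sketch, not the paper's exact formulation; the function name and temperature default are assumptions):

```python
import math

def info_nce(sim_pos, sim_negs, temperature=1.0):
    """InfoNCE loss for one anchor sample.

    sim_pos:  similarity between the anchor and its positive pair
    sim_negs: similarities between the anchor and negative samples
    Returns -log softmax probability assigned to the positive.
    """
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    # Numerically stable log-sum-exp for the softmax normalizer.
    m = max(logits)
    log_norm = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(sim_pos / temperature - log_norm)
```

As expected of a contrastive objective, the loss decreases as the anchor becomes more similar to its positive relative to the negatives.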

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-zimmermann21a,
  title     = {Contrastive Learning Inverts the Data Generating Process},
  author    = {Zimmermann, Roland S. and Sharma, Yash and Schneider, Steffen and Bethge, Matthias and Brendel, Wieland},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {12979--12990},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/zimmermann21a/zimmermann21a.pdf},
  url       = {https://proceedings.mlr.press/v139/zimmermann21a.html},
  abstract  = {Contrastive learning has recently seen tremendous success in self-supervised learning. So far, however, it is largely unclear why the learned representations generalize so effectively to a large variety of downstream tasks. We here prove that feedforward models trained with objectives belonging to the commonly used InfoNCE family learn to implicitly invert the underlying generative model of the observed data. While the proofs make certain statistical assumptions about the generative model, we observe empirically that our findings hold even if these assumptions are severely violated. Our theory highlights a fundamental connection between contrastive learning, generative modeling, and nonlinear independent component analysis, thereby furthering our understanding of the learned representations as well as providing a theoretical foundation to derive more effective contrastive losses.}
}
Endnote
%0 Conference Paper
%T Contrastive Learning Inverts the Data Generating Process
%A Roland S. Zimmermann
%A Yash Sharma
%A Steffen Schneider
%A Matthias Bethge
%A Wieland Brendel
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-zimmermann21a
%I PMLR
%P 12979--12990
%U https://proceedings.mlr.press/v139/zimmermann21a.html
%V 139
%X Contrastive learning has recently seen tremendous success in self-supervised learning. So far, however, it is largely unclear why the learned representations generalize so effectively to a large variety of downstream tasks. We here prove that feedforward models trained with objectives belonging to the commonly used InfoNCE family learn to implicitly invert the underlying generative model of the observed data. While the proofs make certain statistical assumptions about the generative model, we observe empirically that our findings hold even if these assumptions are severely violated. Our theory highlights a fundamental connection between contrastive learning, generative modeling, and nonlinear independent component analysis, thereby furthering our understanding of the learned representations as well as providing a theoretical foundation to derive more effective contrastive losses.
APA
Zimmermann, R.S., Sharma, Y., Schneider, S., Bethge, M. & Brendel, W. (2021). Contrastive Learning Inverts the Data Generating Process. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:12979-12990. Available from https://proceedings.mlr.press/v139/zimmermann21a.html.
