Contrastive learning, multi-view redundancy, and linear models

Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:1179-1206, 2021.

Abstract

Self-supervised learning is an empirically successful approach to unsupervised learning based on creating artificial supervised learning problems. A popular self-supervised approach to representation learning is contrastive learning, which leverages naturally occurring pairs of similar and dissimilar data points, or multiple views of the same data. This work provides a theoretical analysis of contrastive learning in the multi-view setting, where two views of each datum are available. The main result is that linear functions of the learned representations are nearly optimal on downstream prediction tasks whenever the two views provide redundant information about the label.
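The ingredients in the abstract (two redundant views, a contrastive discrimination task, and a linear predictor on top of the learned representation) can be illustrated with a toy sketch. This is a hypothetical synthetic setup for intuition only, not the paper's actual construction, guarantees, or experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-view data (hypothetical setup): both views are noisy
# copies of a +/-1 label, so each view alone carries redundant
# information about y.
n = 2000
y = rng.choice([-1.0, 1.0], size=n)
x1 = y + 0.3 * rng.standard_normal(n)
x2 = y + 0.3 * rng.standard_normal(n)

# Contrastive task: tell true pairs (x1_i, x2_i) apart from randomly
# re-paired ones (x1_i, x2_{pi(i)}).
perm = rng.permutation(n)
pairs = np.concatenate([np.stack([x1, x2], axis=1),
                        np.stack([x1, x2[perm]], axis=1)])
labels = np.concatenate([np.ones(n), np.zeros(n)])

# Include the product x1*x2 so a linear classifier can detect the
# dependence between the two views.
feats = np.column_stack([pairs, pairs[:, 0] * pairs[:, 1], np.ones(2 * n)])

# Plain logistic regression trained by gradient descent.
w = np.zeros(feats.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    w -= 0.1 * feats.T @ (p - labels) / len(labels)

contrastive_acc = np.mean((feats @ w > 0) == (labels == 1))

# Landmark-style representation: embed a view x1 by its contrastive
# scores against a few fixed values of the other view.
landmarks = np.array([-1.0, 0.0, 1.0])

def embed(v):
    f = np.column_stack([np.full_like(landmarks, v), landmarks,
                         v * landmarks, np.ones_like(landmarks)])
    return 1.0 / (1.0 + np.exp(-f @ w))

reps = np.column_stack([np.array([embed(v) for v in x1]), np.ones(n)])

# Linear probe: least squares from the representation to the label.
probe, *_ = np.linalg.lstsq(reps, y, rcond=None)
probe_acc = np.mean(np.sign(reps @ probe) == y)
print(f"contrastive acc: {contrastive_acc:.2f}, probe acc: {probe_acc:.2f}")
```

The landmark embedding here loosely mirrors the idea that scores from the learned contrastive function can serve as a representation on which a simple linear predictor of the label does well when the views are redundant; the specific features, landmarks, and training loop are illustrative choices, not the paper's.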

Cite this Paper


BibTeX
@InProceedings{pmlr-v132-tosh21a,
  title     = {Contrastive learning, multi-view redundancy, and linear models},
  author    = {Tosh, Christopher and Krishnamurthy, Akshay and Hsu, Daniel},
  booktitle = {Proceedings of the 32nd International Conference on Algorithmic Learning Theory},
  pages     = {1179--1206},
  year      = {2021},
  editor    = {Vitaly Feldman and Katrina Ligett and Sivan Sabato},
  volume    = {132},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--19 Mar},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v132/tosh21a/tosh21a.pdf},
  url       = {http://proceedings.mlr.press/v132/tosh21a.html},
  abstract  = {Self-supervised learning is an empirically successful approach to unsupervised learning based on creating artificial supervised learning problems. A popular self-supervised approach to representation learning is contrastive learning, which leverages naturally occurring pairs of similar and dissimilar data points, or multiple views of the same data. This work provides a theoretical analysis of contrastive learning in the multi-view setting, where two views of each datum are available. The main result is that linear functions of the learned representations are nearly optimal on downstream prediction tasks whenever the two views provide redundant information about the label.}
}
Endnote
%0 Conference Paper
%T Contrastive learning, multi-view redundancy, and linear models
%A Christopher Tosh
%A Akshay Krishnamurthy
%A Daniel Hsu
%B Proceedings of the 32nd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Vitaly Feldman
%E Katrina Ligett
%E Sivan Sabato
%F pmlr-v132-tosh21a
%I PMLR
%P 1179--1206
%U http://proceedings.mlr.press/v132/tosh21a.html
%V 132
%X Self-supervised learning is an empirically successful approach to unsupervised learning based on creating artificial supervised learning problems. A popular self-supervised approach to representation learning is contrastive learning, which leverages naturally occurring pairs of similar and dissimilar data points, or multiple views of the same data. This work provides a theoretical analysis of contrastive learning in the multi-view setting, where two views of each datum are available. The main result is that linear functions of the learned representations are nearly optimal on downstream prediction tasks whenever the two views provide redundant information about the label.
APA
Tosh, C., Krishnamurthy, A. & Hsu, D. (2021). Contrastive learning, multi-view redundancy, and linear models. Proceedings of the 32nd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 132:1179-1206. Available from http://proceedings.mlr.press/v132/tosh21a.html.
