Contrasting the landscape of contrastive and non-contrastive learning

Ashwini Pokle, Jinjin Tian, Yuchen Li, Andrej Risteski
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:8592-8618, 2022.

Abstract

Many recent advances in unsupervised feature learning are based on designing features that are invariant under semantic data augmentations. A common way to achieve this is contrastive learning, which uses positive and negative samples. Some recent works, however, have shown promising results for non-contrastive learning, which does not require negative samples. Non-contrastive losses nevertheless have obvious “collapsed” minima, in which the encoder outputs a constant feature embedding, independent of the input. A folk conjecture is that as long as these collapsed solutions are avoided, the produced feature representations should be good. In our paper, we cast doubt on this story: we show through theoretical results and controlled experiments that even on simple data models, non-contrastive losses have a preponderance of non-collapsed bad minima. Moreover, we show that the training process does not avoid these minima. Code for this work can be found at https://github.com/ashwinipokle/contrastive_landscape.
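To make the collapse phenomenon concrete, the sketch below (a minimal illustration in PyTorch, not the code from the repository above; the function names, toy dimensions, and noisy "augmentations" are our own assumptions) contrasts the two kinds of losses. A non-contrastive loss that only aligns the embeddings of two augmented views is exactly minimized by a collapsed encoder that outputs the same constant vector for every input, whereas an InfoNCE-style contrastive loss, which uses the other samples in the batch as negatives, leaves such a collapsed encoder at chance level.

# Minimal illustrative sketch (not the paper's code): why alignment-only,
# non-contrastive losses admit collapsed minima while contrastive losses do not.
import torch
import torch.nn.functional as F

def non_contrastive_loss(z1, z2):
    # Alignment-only objective: negative cosine similarity between the two views' embeddings.
    return -F.cosine_similarity(z1, z2, dim=-1).mean()

def contrastive_loss(z1, z2, temperature=0.5):
    # InfoNCE-style objective: other samples in the batch act as negatives.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature        # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0))         # positive pairs sit on the diagonal
    return F.cross_entropy(logits, labels)

x = torch.randn(8, 32)                                                  # toy batch of inputs
x1, x2 = x + 0.1 * torch.randn_like(x), x + 0.1 * torch.randn_like(x)   # two noisy "augmentations"

encoder = torch.nn.Linear(32, 16)                      # an input-dependent encoder
collapsed = lambda inp: torch.ones(inp.size(0), 16)    # constant embedding, independent of the input

print(non_contrastive_loss(encoder(x1), encoder(x2)))      # > -1: views are not perfectly aligned
print(non_contrastive_loss(collapsed(x1), collapsed(x2)))  # exactly -1.0: collapse minimizes the loss
print(contrastive_loss(collapsed(x1), collapsed(x2)))      # log(8) ~ 2.08: with negatives, collapse is only chance level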

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-pokle22a,
  title     = {Contrasting the landscape of contrastive and non-contrastive learning},
  author    = {Pokle, Ashwini and Tian, Jinjin and Li, Yuchen and Risteski, Andrej},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {8592--8618},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/pokle22a/pokle22a.pdf},
  url       = {https://proceedings.mlr.press/v151/pokle22a.html}
}
APA
Pokle, A., Tian, J., Li, Y. & Risteski, A. (2022). Contrasting the landscape of contrastive and non-contrastive learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:8592-8618. Available from https://proceedings.mlr.press/v151/pokle22a.html.