Do More Negative Samples Necessarily Hurt In Contrastive Learning?

Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:1101-1116, 2022.

Abstract

Recent investigations in noise contrastive estimation suggest, both empirically as well as theoretically, that while having more “negative samples” in the contrastive loss improves downstream classification performance initially, beyond a threshold, it hurts downstream performance due to a “collision-coverage” trade-off. But is such a phenomenon inherent in contrastive learning? We show in a simple theoretical setting, where positive pairs are generated by sampling from the underlying latent class (introduced by Saunshi et al. (ICML 2019)), that the downstream performance of the representation optimizing the (population) contrastive loss in fact does not degrade with the number of negative samples. Along the way, we give a structural characterization of the optimal representation in our framework, for noise contrastive estimation. We also provide empirical support for our theoretical results on CIFAR-10 and CIFAR-100 datasets.
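For context, a minimal sketch of the contrastive loss with $k$ negative samples, in the standard form used by the Saunshi et al. (ICML 2019) latent-class framework that the abstract refers to, is given below; the paper's precise formulation may differ in details such as normalization or a similarity temperature.

\[
  \mathcal{L}_k(f) \;=\; \mathbb{E}_{(x,\,x^+),\; x_1^-,\dots,x_k^-}\left[ -\log \frac{\exp\!\big(f(x)^\top f(x^+)\big)}{\exp\!\big(f(x)^\top f(x^+)\big) \;+\; \sum_{i=1}^{k} \exp\!\big(f(x)^\top f(x_i^-)\big)} \right]
\]

Here $(x, x^+)$ is a positive pair drawn from the same latent class, the negatives $x_1^-,\dots,x_k^-$ are drawn independently, and $f$ is the representation being learned; downstream performance is then typically measured by a linear (or mean) classifier trained on top of the fixed $f$. A "collision" in the collision-coverage trade-off refers to a negative $x_i^-$ that happens to share the latent class of $x$.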

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-awasthi22b,
  title     = {Do More Negative Samples Necessarily Hurt In Contrastive Learning?},
  author    = {Awasthi, Pranjal and Dikkala, Nishanth and Kamath, Pritish},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {1101--1116},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/awasthi22b/awasthi22b.pdf},
  url       = {https://proceedings.mlr.press/v162/awasthi22b.html},
  abstract  = {Recent investigations in noise contrastive estimation suggest, both empirically as well as theoretically, that while having more “negative samples” in the contrastive loss improves downstream classification performance initially, beyond a threshold, it hurts downstream performance due to a “collision-coverage” trade-off. But is such a phenomenon inherent in contrastive learning? We show in a simple theoretical setting, where positive pairs are generated by sampling from the underlying latent class (introduced by Saunshi et al. (ICML 2019)), that the downstream performance of the representation optimizing the (population) contrastive loss in fact does not degrade with the number of negative samples. Along the way, we give a structural characterization of the optimal representation in our framework, for noise contrastive estimation. We also provide empirical support for our theoretical results on CIFAR-10 and CIFAR-100 datasets.}
}
Endnote
%0 Conference Paper
%T Do More Negative Samples Necessarily Hurt In Contrastive Learning?
%A Pranjal Awasthi
%A Nishanth Dikkala
%A Pritish Kamath
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-awasthi22b
%I PMLR
%P 1101--1116
%U https://proceedings.mlr.press/v162/awasthi22b.html
%V 162
%X Recent investigations in noise contrastive estimation suggest, both empirically as well as theoretically, that while having more “negative samples” in the contrastive loss improves downstream classification performance initially, beyond a threshold, it hurts downstream performance due to a “collision-coverage” trade-off. But is such a phenomenon inherent in contrastive learning? We show in a simple theoretical setting, where positive pairs are generated by sampling from the underlying latent class (introduced by Saunshi et al. (ICML 2019)), that the downstream performance of the representation optimizing the (population) contrastive loss in fact does not degrade with the number of negative samples. Along the way, we give a structural characterization of the optimal representation in our framework, for noise contrastive estimation. We also provide empirical support for our theoretical results on CIFAR-10 and CIFAR-100 datasets.
APA
Awasthi, P., Dikkala, N. & Kamath, P. (2022). Do More Negative Samples Necessarily Hurt In Contrastive Learning? Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:1101-1116. Available from https://proceedings.mlr.press/v162/awasthi22b.html.