Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework

Ching-Yun Ko, Jeet Mohapatra, Sijia Liu, Pin-Yu Chen, Luca Daniel, Lily Weng
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:11387-11412, 2022.

Abstract

As a seminal tool in self-supervised representation learning, contrastive learning has gained unprecedented attention in recent years. In essence, contrastive learning aims to leverage pairs of positive and negative samples for representation learning, which relates to exploiting neighborhood information in a feature space. By investigating the connection between contrastive learning and neighborhood component analysis (NCA), we provide a novel stochastic nearest neighbor viewpoint of contrastive learning and subsequently propose a series of contrastive losses that outperform the existing ones. Under our proposed framework, we show a new methodology to design integrated contrastive losses that could simultaneously achieve good accuracy and robustness on downstream tasks. With the integrated framework, we achieve up to 6% improvement on the standard accuracy and 17% improvement on the robust accuracy.
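The softmax-over-neighbors form shared by contrastive learning and NCA, which the abstract alludes to, can be illustrated with a minimal sketch. The code below implements a generic InfoNCE-style loss for a single anchor; the softmax over similarities has the same shape as NCA's stochastic nearest-neighbor probability p_ij = exp(s_ij) / Σ_k exp(s_ik). All names and values here are illustrative; this is not the paper's proposed integrated loss.

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor (illustrative sketch).

    Minimizing the negative log of the softmax probability assigned to the
    positive pulls the positive close to the anchor and pushes negatives away,
    mirroring NCA's stochastic nearest-neighbor objective.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # Similarity of the anchor to the positive and to each negative.
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    # Log-sum-exp trick for numerical stability.
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)

# A well-aligned positive yields a lower loss than a misaligned one.
anchor = [1.0, 0.0]
low = info_nce(anchor, [0.9, 0.1], negatives=[[0.0, 1.0], [-1.0, 0.0]])
high = info_nce(anchor, [-0.9, 0.1], negatives=[[0.0, 1.0], [0.9, 0.1]])
assert low < high
```

The temperature parameter scales the similarities before the softmax; smaller values sharpen the neighbor distribution, exactly as the scale of distances does in NCA.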

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-ko22a,
  title     = {Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework},
  author    = {Ko, Ching-Yun and Mohapatra, Jeet and Liu, Sijia and Chen, Pin-Yu and Daniel, Luca and Weng, Lily},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {11387--11412},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/ko22a/ko22a.pdf},
  url       = {https://proceedings.mlr.press/v162/ko22a.html},
  abstract  = {As a seminal tool in self-supervised representation learning, contrastive learning has gained unprecedented attention in recent years. In essence, contrastive learning aims to leverage pairs of positive and negative samples for representation learning, which relates to exploiting neighborhood information in a feature space. By investigating the connection between contrastive learning and neighborhood component analysis (NCA), we provide a novel stochastic nearest neighbor viewpoint of contrastive learning and subsequently propose a series of contrastive losses that outperform the existing ones. Under our proposed framework, we show a new methodology to design integrated contrastive losses that could simultaneously achieve good accuracy and robustness on downstream tasks. With the integrated framework, we achieve up to 6% improvement on the standard accuracy and 17% improvement on the robust accuracy.}
}
Endnote
%0 Conference Paper
%T Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework
%A Ching-Yun Ko
%A Jeet Mohapatra
%A Sijia Liu
%A Pin-Yu Chen
%A Luca Daniel
%A Lily Weng
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-ko22a
%I PMLR
%P 11387--11412
%U https://proceedings.mlr.press/v162/ko22a.html
%V 162
%X As a seminal tool in self-supervised representation learning, contrastive learning has gained unprecedented attention in recent years. In essence, contrastive learning aims to leverage pairs of positive and negative samples for representation learning, which relates to exploiting neighborhood information in a feature space. By investigating the connection between contrastive learning and neighborhood component analysis (NCA), we provide a novel stochastic nearest neighbor viewpoint of contrastive learning and subsequently propose a series of contrastive losses that outperform the existing ones. Under our proposed framework, we show a new methodology to design integrated contrastive losses that could simultaneously achieve good accuracy and robustness on downstream tasks. With the integrated framework, we achieve up to 6% improvement on the standard accuracy and 17% improvement on the robust accuracy.
APA
Ko, C., Mohapatra, J., Liu, S., Chen, P., Daniel, L. & Weng, L. (2022). Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:11387-11412. Available from https://proceedings.mlr.press/v162/ko22a.html.
