Model-Aware Contrastive Learning: Towards Escaping the Dilemmas

Zizheng Huang, Haoxing Chen, Ziqi Wen, Chao Zhang, Huaxiong Li, Bo Wang, Chunlin Chen
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:13774-13790, 2023.

Abstract

Contrastive learning (CL) continues to achieve significant breakthroughs across multiple domains. However, the most common InfoNCE-based methods suffer from dilemmas such as the uniformity-tolerance dilemma (UTD) and gradient reduction, both of which are related to a $\mathcal{P}_{ij}$ term. It has been identified that UTD can lead to unexpected performance degradation. We argue that the fixed temperature is to blame for UTD. To tackle this challenge, we enrich the CL loss family with a Model-Aware Contrastive Learning (MACL) strategy, whose temperature adapts to the magnitude of alignment, which reflects the model's current confidence on the instance-discrimination task; this enables the CL loss to adjust the penalty strength on hard negatives adaptively. Regarding the other dilemma, gradient reduction, we derive the limits of an involved gradient scaling factor, which allows us to explain from a unified perspective why some recent approaches are effective with fewer negative samples, and we further present a gradient reweighting scheme to escape this dilemma. Extensive empirical results on vision, sentence, and graph modalities validate our approach's general improvements in representation learning and on downstream tasks.
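
For readers skimming the abstract, the following is a minimal PyTorch sketch of the adaptive-temperature idea: an InfoNCE/NT-Xent loss whose temperature is modulated by the mean positive-pair alignment of the current batch. The function name macl_loss, the parameters tau_0 and alpha, and the linear adaptation rule are illustrative assumptions rather than the paper's published formulation; the exact schedule and the gradient-reweighting component are given in the full text.

    # Illustrative sketch only: a contrastive loss whose temperature adapts to
    # the batch alignment (mean positive-pair cosine similarity), in the spirit
    # of the MACL idea summarized in the abstract. Names and the linear
    # adaptation rule below are assumptions, not the paper's exact method.
    import torch
    import torch.nn.functional as F

    def macl_loss(z1: torch.Tensor, z2: torch.Tensor,
                  tau_0: float = 0.1, alpha: float = 0.5) -> torch.Tensor:
        """z1, z2: embeddings of two augmented views, shape (N, d)."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)

        # Alignment: mean cosine similarity between positive pairs, used here
        # as a proxy for the model's confidence on instance discrimination.
        align = (z1 * z2).sum(dim=1).mean().detach()          # in [-1, 1]

        # Model-aware temperature: raise the temperature as alignment grows,
        # relaxing the penalty on hard negatives once the model is confident.
        tau = tau_0 * (1.0 + alpha * align.clamp(min=0.0))

        # Standard InfoNCE over the 2N-view batch with the adaptive temperature.
        z = torch.cat([z1, z2], dim=0)                         # (2N, d)
        sim = z @ z.t() / tau                                  # (2N, 2N)
        sim.fill_diagonal_(float('-inf'))                      # mask self-similarity
        n = z1.size(0)
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets.to(sim.device))

A fixed temperature would simply drop the align-dependent factor; the sketch only changes how tau is chosen, leaving the rest of the standard contrastive objective intact.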

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-huang23c, title = {Model-Aware Contrastive Learning: Towards Escaping the Dilemmas}, author = {Huang, Zizheng and Chen, Haoxing and Wen, Ziqi and Zhang, Chao and Li, Huaxiong and Wang, Bo and Chen, Chunlin}, booktitle = {Proceedings of the 40th International Conference on Machine Learning}, pages = {13774--13790}, year = {2023}, editor = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan}, volume = {202}, series = {Proceedings of Machine Learning Research}, month = {23--29 Jul}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v202/huang23c/huang23c.pdf}, url = {https://proceedings.mlr.press/v202/huang23c.html}, abstract = {Contrastive learning (CL) continuously achieves significant breakthroughs across multiple domains. However, the most common InfoNCE-based methods suffer from some dilemmas, such as uniformity-tolerance dilemma (UTD) and gradient reduction, both of which are related to a $\mathcal{P}_{ij}$ term. It has been identified that UTD can lead to unexpected performance degradation. We argue that the fixity of temperature is to blame for UTD. To tackle this challenge, we enrich the CL loss family by presenting a Model-Aware Contrastive Learning (MACL) strategy, whose temperature is adaptive to the magnitude of alignment that reflects the basic confidence of the instance discrimination task, then enables CL loss to adjust the penalty strength for hard negatives adaptively. Regarding another dilemma, the gradient reduction issue, we derive the limits of an involved gradient scaling factor, which allows us to explain from a unified perspective why some recent approaches are effective with fewer negative samples, and summarily present a gradient reweighting to escape this dilemma. Extensive remarkable empirical results in vision, sentence, and graph modality validate our approach’s general improvement for representation learning and downstream tasks.} }
Endnote
%0 Conference Paper %T Model-Aware Contrastive Learning: Towards Escaping the Dilemmas %A Zizheng Huang %A Haoxing Chen %A Ziqi Wen %A Chao Zhang %A Huaxiong Li %A Bo Wang %A Chunlin Chen %B Proceedings of the 40th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2023 %E Andreas Krause %E Emma Brunskill %E Kyunghyun Cho %E Barbara Engelhardt %E Sivan Sabato %E Jonathan Scarlett %F pmlr-v202-huang23c %I PMLR %P 13774--13790 %U https://proceedings.mlr.press/v202/huang23c.html %V 202 %X Contrastive learning (CL) continuously achieves significant breakthroughs across multiple domains. However, the most common InfoNCE-based methods suffer from some dilemmas, such as uniformity-tolerance dilemma (UTD) and gradient reduction, both of which are related to a $\mathcal{P}_{ij}$ term. It has been identified that UTD can lead to unexpected performance degradation. We argue that the fixity of temperature is to blame for UTD. To tackle this challenge, we enrich the CL loss family by presenting a Model-Aware Contrastive Learning (MACL) strategy, whose temperature is adaptive to the magnitude of alignment that reflects the basic confidence of the instance discrimination task, then enables CL loss to adjust the penalty strength for hard negatives adaptively. Regarding another dilemma, the gradient reduction issue, we derive the limits of an involved gradient scaling factor, which allows us to explain from a unified perspective why some recent approaches are effective with fewer negative samples, and summarily present a gradient reweighting to escape this dilemma. Extensive remarkable empirical results in vision, sentence, and graph modality validate our approach’s general improvement for representation learning and downstream tasks.
APA
Huang, Z., Chen, H., Wen, Z., Zhang, C., Li, H., Wang, B. & Chen, C. (2023). Model-Aware Contrastive Learning: Towards Escaping the Dilemmas. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:13774-13790. Available from https://proceedings.mlr.press/v202/huang23c.html.