Silhouette Distance Loss for Learning Few-Shot Contrastive Representations

Kosmas Pinitas, Nemanja Rasajski, Konstantinos Makantasis, Georgios Yannakakis
Proceedings of The Workshop on Classifier Learning from Difficult Data, PMLR 263:32-39, 2024.

Abstract

Conventional supervised contrastive learning methods excel at optimising encoders for discriminative tasks. In scenarios where only a few labelled samples are available, however, they struggle to eliminate the inductive bias when transferring from source to target classes. This is a byproduct (and inherent limitation) of their underlying optimisation process, which trains a representation to maximise class separation without directly optimising for within-class cohesion. In response to this limitation, this paper introduces the Silhouette Distance (SD) loss, a new optimisation objective for supervised contrastive representation learning. SD aims to enhance the quality of learned embeddings by emphasising both the cohesion and the separation of the representation clusters of each class. We test SD extensively across several few-shot learning scenarios, where labelled data is limited, and compare its performance against the supervised contrastive loss and the prototypical network loss on various text and image classification tasks. We also test SD in a cross-domain manner, training a model on one dataset and testing it on another within the same modality. Our results demonstrate the superior, or at worst competitive, performance of the SD loss compared to its baselines. By leveraging pre-trained models and fine-tuning techniques, our study highlights how the SD loss can effectively improve representation learning across different modalities and domains. This initial study showcases the potential of the SD loss as a robust alternative within the few-shot learning setting.
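
Although the paper's exact formulation is not reproduced on this page, the abstract's description of SD as rewarding both within-class cohesion and between-class separation maps naturally onto the classical silhouette coefficient. The sketch below is a minimal, hypothetical PyTorch rendering of such an objective, assuming L2-normalised embeddings, Euclidean distances, and a "1 minus mean silhouette" reduction; the function name and these choices are our own illustration and may differ from the SD loss actually proposed in the paper.

    import torch
    import torch.nn.functional as F

    def silhouette_distance_loss(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        """Hypothetical silhouette-style objective (illustrative sketch only).

        Encourages within-class cohesion and between-class separation by
        maximising the mean silhouette score of the batch embeddings.
        """
        # Pairwise Euclidean distances between L2-normalised embeddings.
        z = F.normalize(embeddings, dim=-1)
        dist = torch.cdist(z, z)                      # shape (N, N)

        classes = labels.unique()
        scores = []
        for c in classes:
            in_c = labels == c
            n_c = int(in_c.sum())
            if n_c < 2:                               # silhouette undefined for singleton clusters
                continue
            d_c = dist[in_c]                          # distances from class-c samples to all samples
            # a(i): mean distance to the other members of the same class (self-distance is 0).
            a = d_c[:, in_c].sum(dim=1) / (n_c - 1)
            # b(i): smallest mean distance to the members of any other class.
            b = torch.stack(
                [d_c[:, labels == o].mean(dim=1) for o in classes if o != c], dim=1
            ).min(dim=1).values
            scores.append((b - a) / torch.clamp(torch.maximum(a, b), min=1e-8))

        # Silhouette scores lie in [-1, 1]; minimising (1 - mean) drives them towards 1.
        return 1.0 - torch.cat(scores).mean()

In a typical training loop this term would be computed on the mini-batch embeddings produced by the encoder; the sketch assumes each batch contains at least two classes, with at least two samples in at least one of them.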

Cite this Paper


BibTeX
@InProceedings{pmlr-v263-kosmas24a,
  title     = {Silhouette Distance Loss for Learning Few-Shot Contrastive Representations},
  author    = {Pinitas, Kosmas and Rasajski, Nemanja and Makantasis, Konstantinos and Yannakakis, Georgios},
  booktitle = {Proceedings of The Workshop on Classifier Learning from Difficult Data},
  pages     = {32--39},
  year      = {2024},
  editor    = {Zyblewski, Pawel and Grana, Manuel and Ksieniewicz, Pawel and Minku, Leandro},
  volume    = {263},
  series    = {Proceedings of Machine Learning Research},
  month     = {19--20 Oct},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v263/main/assets/kosmas24a/kosmas24a.pdf},
  url       = {https://proceedings.mlr.press/v263/kosmas24a.html}
}
Endnote
%0 Conference Paper
%T Silhouette Distance Loss for Learning Few-Shot Contrastive Representations
%A Kosmas Pinitas
%A Nemanja Rasajski
%A Konstantinos Makantasis
%A Georgios Yannakakis
%B Proceedings of The Workshop on Classifier Learning from Difficult Data
%C Proceedings of Machine Learning Research
%D 2024
%E Pawel Zyblewski
%E Manuel Grana
%E Pawel Ksieniewicz
%E Leandro Minku
%F pmlr-v263-kosmas24a
%I PMLR
%P 32--39
%U https://proceedings.mlr.press/v263/kosmas24a.html
%V 263
APA
Pinitas, K., Rasajski, N., Makantasis, K. & Yannakakis, G. (2024). Silhouette Distance Loss for Learning Few-Shot Contrastive Representations. Proceedings of The Workshop on Classifier Learning from Difficult Data, in Proceedings of Machine Learning Research 263:32-39. Available from https://proceedings.mlr.press/v263/kosmas24a.html.