Collapse-Proof Non-Contrastive Self-Supervised Learning

Emanuele Sansone, Tim Lebailly, Tinne Tuytelaars
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:52833-52860, 2025.

Abstract

We present a principled and simplified design of the projector and loss function for non-contrastive self-supervised learning based on hyperdimensional computing. We theoretically demonstrate that this design introduces an inductive bias that encourages representations to be simultaneously decorrelated and clustered, without explicitly enforcing these properties. This bias provably enhances generalization and suffices to avoid known training failure modes, such as representation, dimensional, cluster, and intracluster collapses. We validate our theoretical findings on image datasets, including SVHN, CIFAR-10, CIFAR-100, and ImageNet-100. Our approach effectively combines the strengths of feature decorrelation and cluster-based self-supervised learning methods, overcoming training failure modes while achieving strong generalization in clustering and linear classification tasks.
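The abstract contrasts the proposed inductive bias with methods that enforce decorrelation explicitly. Purely as background for readers unfamiliar with that family, the sketch below shows what an explicit feature-decorrelation objective looks like, in the style of Barlow Twins. This is not the paper's hyperdimensional-computing projector or loss (see the linked PDF for those); the function name, normalization constant, and weight lam are illustrative assumptions.

```python
# Background sketch only: a generic non-contrastive loss with an explicit
# feature-decorrelation term (Barlow Twins style). The paper's design,
# based on hyperdimensional computing, is claimed to induce decorrelated
# and clustered representations WITHOUT terms like these.
import torch

def decorrelation_loss(z1: torch.Tensor, z2: torch.Tensor,
                       lam: float = 5e-3) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two augmented views of the same images."""
    n, _ = z1.shape
    # Standardize each feature dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / n  # (dim, dim) cross-correlation matrix between views
    # Invariance: each feature should agree across the two views (diagonal -> 1).
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy reduction: distinct features should be decorrelated (off-diagonal -> 0).
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag
```

The paper's claim, as stated in the abstract, is that a suitably designed projector and loss make an explicit penalty like off_diag unnecessary, while provably avoiding representation, dimensional, cluster, and intracluster collapses.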

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-sansone25a,
  title     = {Collapse-Proof Non-Contrastive Self-Supervised Learning},
  author    = {Sansone, Emanuele and Lebailly, Tim and Tuytelaars, Tinne},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {52833--52860},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/sansone25a/sansone25a.pdf},
  url       = {https://proceedings.mlr.press/v267/sansone25a.html},
  abstract  = {We present a principled and simplified design of the projector and loss function for non-contrastive self-supervised learning based on hyperdimensional computing. We theoretically demonstrate that this design introduces an inductive bias that encourages representations to be simultaneously decorrelated and clustered, without explicitly enforcing these properties. This bias provably enhances generalization and suffices to avoid known training failure modes, such as representation, dimensional, cluster, and intracluster collapses. We validate our theoretical findings on image datasets, including SVHN, CIFAR-10, CIFAR-100, and ImageNet-100. Our approach effectively combines the strengths of feature decorrelation and cluster-based self-supervised learning methods, overcoming training failure modes while achieving strong generalization in clustering and linear classification tasks.}
}
Endnote
%0 Conference Paper
%T Collapse-Proof Non-Contrastive Self-Supervised Learning
%A Emanuele Sansone
%A Tim Lebailly
%A Tinne Tuytelaars
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-sansone25a
%I PMLR
%P 52833--52860
%U https://proceedings.mlr.press/v267/sansone25a.html
%V 267
%X We present a principled and simplified design of the projector and loss function for non-contrastive self-supervised learning based on hyperdimensional computing. We theoretically demonstrate that this design introduces an inductive bias that encourages representations to be simultaneously decorrelated and clustered, without explicitly enforcing these properties. This bias provably enhances generalization and suffices to avoid known training failure modes, such as representation, dimensional, cluster, and intracluster collapses. We validate our theoretical findings on image datasets, including SVHN, CIFAR-10, CIFAR-100, and ImageNet-100. Our approach effectively combines the strengths of feature decorrelation and cluster-based self-supervised learning methods, overcoming training failure modes while achieving strong generalization in clustering and linear classification tasks.
APA
Sansone, E., Lebailly, T. & Tuytelaars, T. (2025). Collapse-Proof Non-Contrastive Self-Supervised Learning. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:52833-52860. Available from https://proceedings.mlr.press/v267/sansone25a.html.
