The Edge of Orthogonality: A Simple View of What Makes BYOL Tick

Pierre Harvey Richemond, Allison Tam, Yunhao Tang, Florian Strub, Bilal Piot, Felix Hill
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:29063-29081, 2023.

Abstract

Self-predictive unsupervised learning methods such as BYOL or SimSiam have shown impressive results and, counter-intuitively, do not collapse to trivial representations. In this work, we aim to explore the simplest possible mathematical arguments that explain the underlying mechanisms behind self-predictive unsupervised learning. We start from the observation that these methods crucially rely on the presence of a predictor network (and stop-gradient). With simple linear algebra, we show that when using a linear predictor, the optimal predictor is close to an orthogonal projection, and we propose a general framework based on orthonormalization that lets us interpret BYOL and gives intuition for why it works. This framework also demonstrates the crucial role of the exponential moving average and stop-gradient operator in BYOL as an efficient orthonormalization mechanism. We use these insights to propose four new closed-form predictor variants of BYOL to support our analysis. Our closed-form predictors outperform BYOL with a standard trainable linear predictor at 100 and 300 epochs (top-1 linear accuracy on ImageNet).
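To make the mechanism the abstract refers to concrete, here is a minimal, illustrative NumPy sketch of a BYOL-style update with a linear predictor, a stop-gradient on the target branch, and an exponential-moving-average (EMA) target encoder. It is not the paper's code: the encoders are reduced to single linear maps, the loss is a plain squared error rather than the normalized loss used in practice, and all names and hyperparameters are invented for illustration.

```python
# Illustrative NumPy sketch (not the authors' code) of a BYOL-style update:
# an online encoder plus a *linear* predictor is trained to match an
# EMA "target" encoder, with no gradient flowing through the target branch.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_rep = 32, 16
W_online = 0.1 * rng.normal(size=(d_rep, d_in))  # online encoder (a single linear map here)
W_target = W_online.copy()                       # EMA target encoder
P = np.eye(d_rep)                                # linear predictor
lr, tau = 0.1, 0.99                              # learning rate and EMA decay (made-up values)

for step in range(100):
    x = rng.normal(size=(64, d_in))
    # Two "augmented views" of the same batch (toy stand-in for image augmentations).
    x1 = x + 0.05 * rng.normal(size=x.shape)
    x2 = x + 0.05 * rng.normal(size=x.shape)

    z_online = x1 @ W_online.T        # online representation
    z_target = x2 @ W_target.T        # target representation (stop-gradient: treated as a constant)
    diff = z_online @ P.T - z_target  # prediction error of the linear predictor

    # Gradients of 0.5 * ||P z_online - z_target||^2, averaged over the batch;
    # only the online encoder and the predictor receive gradients.
    grad_P = diff.T @ z_online / len(x)
    grad_W = (diff @ P).T @ x1 / len(x)
    P -= lr * grad_P
    W_online -= lr * grad_W

    # EMA update of the target encoder.
    W_target = tau * W_target + (1 - tau) * W_online
```

Because every map in this toy setup is linear, the least-squares-optimal predictor could equally be written in closed form from second-moment statistics of the representations, which is in the spirit of the closed-form predictor variants the abstract mentions.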

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-richemond23a,
  title     = {The Edge of Orthogonality: A Simple View of What Makes {BYOL} Tick},
  author    = {Richemond, Pierre Harvey and Tam, Allison and Tang, Yunhao and Strub, Florian and Piot, Bilal and Hill, Felix},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {29063--29081},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/richemond23a/richemond23a.pdf},
  url       = {https://proceedings.mlr.press/v202/richemond23a.html},
  abstract  = {Self-predictive unsupervised learning methods such as BYOL or SimSIAM have shown impressive results, and counter-intuitively, do not collapse to trivial representations. In this work, we aim at exploring the simplest possible mathematical arguments towards explaining the underlying mechanisms behind self-predictive unsupervised learning. We start with the observation that those methods crucially rely on the presence of a predictor network (and stop-gradient). With simple linear algebra, we show that when using a linear predictor, the optimal predictor is close to an orthogonal projection, and propose a general framework based on orthonormalization that enables to interpret and give intuition on why BYOL works. In addition, this framework demonstrates the crucial role of the exponential moving average and stop-gradient operator in BYOL as an efficient orthonormalization mechanism. We use these insights to propose four new closed-form predictor variants of BYOL to support our analysis. Our closed-form predictors outperform standard linear trainable predictor BYOL at 100 and 300 epochs (top-1 linear accuracy on ImageNet).}
}
Endnote
%0 Conference Paper
%T The Edge of Orthogonality: A Simple View of What Makes BYOL Tick
%A Pierre Harvey Richemond
%A Allison Tam
%A Yunhao Tang
%A Florian Strub
%A Bilal Piot
%A Felix Hill
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-richemond23a
%I PMLR
%P 29063--29081
%U https://proceedings.mlr.press/v202/richemond23a.html
%V 202
%X Self-predictive unsupervised learning methods such as BYOL or SimSIAM have shown impressive results, and counter-intuitively, do not collapse to trivial representations. In this work, we aim at exploring the simplest possible mathematical arguments towards explaining the underlying mechanisms behind self-predictive unsupervised learning. We start with the observation that those methods crucially rely on the presence of a predictor network (and stop-gradient). With simple linear algebra, we show that when using a linear predictor, the optimal predictor is close to an orthogonal projection, and propose a general framework based on orthonormalization that enables to interpret and give intuition on why BYOL works. In addition, this framework demonstrates the crucial role of the exponential moving average and stop-gradient operator in BYOL as an efficient orthonormalization mechanism. We use these insights to propose four new closed-form predictor variants of BYOL to support our analysis. Our closed-form predictors outperform standard linear trainable predictor BYOL at 100 and 300 epochs (top-1 linear accuracy on ImageNet).
APA
Richemond, P.H., Tam, A., Tang, Y., Strub, F., Piot, B. & Hill, F. (2023). The Edge of Orthogonality: A Simple View of What Makes BYOL Tick. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:29063-29081. Available from https://proceedings.mlr.press/v202/richemond23a.html.