CLA: Latent Alignment for Online Continual Self-Supervised Learning

Giacomo Cignoni, Andrea Cossu, Alexandra Gomez-Villa, Joost van de Weijer, Antonio Carta
Proceedings of The 4th Conference on Lifelong Learning Agents, PMLR 330:756-775, 2026.

Abstract

Self-supervised learning (SSL) is able to build latent representations that generalize well to unseen data. However, only a few SSL techniques exist for the online CL setting, where data arrives in small minibatches, the model must comply with a fixed computational budget, and task boundaries are absent. We introduce Continual Latent Alignment (CLA), a novel SSL strategy for Online CL that aligns the representations learned by the current model with past representations to mitigate forgetting. We found that our CLA is able to speed up the convergence of the training process in the online scenario, outperforming state-of-the-art approaches under the same computational budget. Surprisingly, we also discovered that using CLA as a pretraining protocol in the early stages of pretraining leads to a better final performance when compared to a full i.i.d. pretraining.

Cite this Paper


BibTeX
@InProceedings{pmlr-v330-cignoni26a,
  title     = {CLA: Latent Alignment for Online Continual Self-Supervised Learning},
  author    = {Cignoni, Giacomo and Cossu, Andrea and Gomez-Villa, Alexandra and Weijer, Joost van de and Carta, Antonio},
  booktitle = {Proceedings of The 4th Conference on Lifelong Learning Agents},
  pages     = {756--775},
  year      = {2026},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Eaton, Eric and Liu, Bing and Mahmood, Rupam and Rannen-Triki, Amal},
  volume    = {330},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v330/main/assets/cignoni26a/cignoni26a.pdf},
  url       = {https://proceedings.mlr.press/v330/cignoni26a.html},
  abstract  = {Self-supervised learning (SSL) is able to build latent representations that generalize well to unseen data. However, only a few SSL techniques exist for the online CL setting, where data arrives in small minibatches, the model must comply with a fixed computational budget, and task boundaries are absent. We introduce Continual Latent Alignment (CLA), a novel SSL strategy for Online CL that aligns the representations learned by the current model with past representations to mitigate forgetting. We found that our CLA is able to speed up the convergence of the training process in the online scenario, outperforming state-of-the-art approaches under the same computational budget. Surprisingly, we also discovered that using CLA as a pretraining protocol in the early stages of pretraining leads to a better final performance when compared to a full i.i.d. pretraining.}
}
Endnote
%0 Conference Paper
%T CLA: Latent Alignment for Online Continual Self-Supervised Learning
%A Giacomo Cignoni
%A Andrea Cossu
%A Alexandra Gomez-Villa
%A Joost van de Weijer
%A Antonio Carta
%B Proceedings of The 4th Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2026
%E Sarath Chandar
%E Razvan Pascanu
%E Eric Eaton
%E Bing Liu
%E Rupam Mahmood
%E Amal Rannen-Triki
%F pmlr-v330-cignoni26a
%I PMLR
%P 756--775
%U https://proceedings.mlr.press/v330/cignoni26a.html
%V 330
%X Self-supervised learning (SSL) is able to build latent representations that generalize well to unseen data. However, only a few SSL techniques exist for the online CL setting, where data arrives in small minibatches, the model must comply with a fixed computational budget, and task boundaries are absent. We introduce Continual Latent Alignment (CLA), a novel SSL strategy for Online CL that aligns the representations learned by the current model with past representations to mitigate forgetting. We found that our CLA is able to speed up the convergence of the training process in the online scenario, outperforming state-of-the-art approaches under the same computational budget. Surprisingly, we also discovered that using CLA as a pretraining protocol in the early stages of pretraining leads to a better final performance when compared to a full i.i.d. pretraining.
APA
Cignoni, G., Cossu, A., Gomez-Villa, A., van de Weijer, J. & Carta, A. (2026). CLA: Latent Alignment for Online Continual Self-Supervised Learning. Proceedings of The 4th Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 330:756-775. Available from https://proceedings.mlr.press/v330/cignoni26a.html.