Integrating Present and Past in Unsupervised Continual Learning

Yipeng Zhang, Laurent Charlin, Richard Zemel, Mengye Ren
Proceedings of The 3rd Conference on Lifelong Learning Agents, PMLR 274:388-409, 2025.

Abstract

We formulate a unifying framework for *unsupervised continual learning (UCL)*, which disentangles learning objectives that are specific to the present and the past data, encompassing *stability*, *plasticity*, and *cross-task consolidation*. The framework reveals that many existing UCL approaches overlook cross-task consolidation and try to balance plasticity and stability in a shared embedding space. This results in worse performance due to a lack of within-task data diversity and reduced effectiveness in learning the current task. Our method, *Osiris*, which explicitly optimizes all three objectives on separate embedding spaces, achieves state-of-the-art performance on all benchmarks, including two novel ones proposed in this paper featuring semantically structured task sequences. Finally, we show some preliminary evidence that continual models can benefit from more realistic learning scenarios such as these.

Cite this Paper


BibTeX
@InProceedings{pmlr-v274-zhang25a,
  title     = {Integrating Present and Past in Unsupervised Continual Learning},
  author    = {Zhang, Yipeng and Charlin, Laurent and Zemel, Richard and Ren, Mengye},
  booktitle = {Proceedings of The 3rd Conference on Lifelong Learning Agents},
  pages     = {388--409},
  year      = {2025},
  editor    = {Lomonaco, Vincenzo and Melacci, Stefano and Tuytelaars, Tinne and Chandar, Sarath and Pascanu, Razvan},
  volume    = {274},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Jul--01 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v274/main/assets/zhang25a/zhang25a.pdf},
  url       = {https://proceedings.mlr.press/v274/zhang25a.html},
  abstract  = {We formulate a unifying framework for *unsupervised continual learning (UCL)*, which disentangles learning objectives that are specific to the present and the past data, encompassing *stability*, *plasticity*, and *cross-task consolidation*. The framework reveals that many existing UCL approaches overlook cross-task consolidation and try to balance plasticity and stability in a shared embedding space. This results in worse performance due to a lack of within-task data diversity and reduced effectiveness in learning the current task. Our method, *Osiris*, which explicitly optimizes all three objectives on separate embedding spaces, achieves state-of-the-art performance on all benchmarks, including two novel ones proposed in this paper featuring semantically structured task sequences. Finally, we show some preliminary evidence that continual models can benefit from such more realistic learning scenarios.}
}
Endnote
%0 Conference Paper
%T Integrating Present and Past in Unsupervised Continual Learning
%A Yipeng Zhang
%A Laurent Charlin
%A Richard Zemel
%A Mengye Ren
%B Proceedings of The 3rd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2025
%E Vincenzo Lomonaco
%E Stefano Melacci
%E Tinne Tuytelaars
%E Sarath Chandar
%E Razvan Pascanu
%F pmlr-v274-zhang25a
%I PMLR
%P 388--409
%U https://proceedings.mlr.press/v274/zhang25a.html
%V 274
%X We formulate a unifying framework for *unsupervised continual learning (UCL)*, which disentangles learning objectives that are specific to the present and the past data, encompassing *stability*, *plasticity*, and *cross-task consolidation*. The framework reveals that many existing UCL approaches overlook cross-task consolidation and try to balance plasticity and stability in a shared embedding space. This results in worse performance due to a lack of within-task data diversity and reduced effectiveness in learning the current task. Our method, *Osiris*, which explicitly optimizes all three objectives on separate embedding spaces, achieves state-of-the-art performance on all benchmarks, including two novel ones proposed in this paper featuring semantically structured task sequences. Finally, we show some preliminary evidence that continual models can benefit from such more realistic learning scenarios.
APA
Zhang, Y., Charlin, L., Zemel, R. & Ren, M. (2025). Integrating Present and Past in Unsupervised Continual Learning. Proceedings of The 3rd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 274:388-409. Available from https://proceedings.mlr.press/v274/zhang25a.html.