Local vs Global continual learning

Giulia Lanzillotta, Sidak Pal Singh, Benjamin F Grewe, Thomas Hofmann
Proceedings of The 3rd Conference on Lifelong Learning Agents, PMLR 274:121-143, 2025.

Abstract

Continual learning is the problem of integrating new information in a model while retaining the knowledge acquired in the past. Despite the tangible improvements achieved in recent years, continual learning is far from solved, and even farther from being understood. A better understanding of the mechanisms behind the successes and failures of existing continual learning algorithms can unlock the development of new successful strategies. In this work we view continual learning from the perspective of the multi-task loss approximation, and we compare two alternative strategies, namely local and global approximations. We classify existing continual learning algorithms based on the approximation used, and we assess the practical effects of this distinction in common continual learning settings. Additionally, we study optimal continual learning objectives in the case of local polynomial approximations and we provide examples of existing algorithms implementing the optimal objectives.
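The abstract's "local polynomial approximation" viewpoint can be illustrated with a standard second-order Taylor expansion; the notation below is a hedged sketch for orientation, not the paper's own formulation. A local approximation replaces the loss of a past task k with its expansion around the parameters reached after training on that task:

```latex
% Illustrative quadratic (second-order) local approximation of a past task's
% loss L_k around the parameters \theta_k^* found after task k.
% Symbols (\mathcal{L}_k, \theta_k^*, H_k) are assumptions for illustration.
\mathcal{L}_k(\theta) \;\approx\;
  \mathcal{L}_k(\theta_k^*)
  + \nabla \mathcal{L}_k(\theta_k^*)^{\!\top} (\theta - \theta_k^*)
  + \tfrac{1}{2}\, (\theta - \theta_k^*)^{\!\top} H_k \,(\theta - \theta_k^*)
```

Regularization-based methods such as EWC can be read as instances of this kind of quadratic local approximation, with the Hessian $H_k$ replaced by a (diagonal) Fisher information estimate; the paper classifies such algorithms by the approximation they implement.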

Cite this Paper


BibTeX
@InProceedings{pmlr-v274-lanzillotta25a,
  title     = {Local vs Global continual learning},
  author    = {Lanzillotta, Giulia and Singh, Sidak Pal and Grewe, Benjamin F and Hofmann, Thomas},
  booktitle = {Proceedings of The 3rd Conference on Lifelong Learning Agents},
  pages     = {121--143},
  year      = {2025},
  editor    = {Lomonaco, Vincenzo and Melacci, Stefano and Tuytelaars, Tinne and Chandar, Sarath and Pascanu, Razvan},
  volume    = {274},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Jul--01 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v274/main/assets/lanzillotta25a/lanzillotta25a.pdf},
  url       = {https://proceedings.mlr.press/v274/lanzillotta25a.html},
  abstract  = {Continual learning is the problem of integrating new information in a model while retaining the knowledge acquired in the past. Despite the tangible improvements achieved in recent years, continual learning is far from solved, and even farther from being understood. A better understanding of the mechanisms behind the successes and failures of existing continual learning algorithms can unlock the development of new successful strategies. In this work we view continual learning from the perspective of the multi-task loss approximation, and we compare two alternative strategies, namely local and global approximations. We classify existing continual learning algorithms based on the approximation used, and we assess the practical effects of this distinction in common continual learning settings. Additionally, we study optimal continual learning objectives in the case of local polynomial approximations and we provide examples of existing algorithms implementing the optimal objectives.}
}
Endnote
%0 Conference Paper
%T Local vs Global continual learning
%A Giulia Lanzillotta
%A Sidak Pal Singh
%A Benjamin F Grewe
%A Thomas Hofmann
%B Proceedings of The 3rd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2025
%E Vincenzo Lomonaco
%E Stefano Melacci
%E Tinne Tuytelaars
%E Sarath Chandar
%E Razvan Pascanu
%F pmlr-v274-lanzillotta25a
%I PMLR
%P 121--143
%U https://proceedings.mlr.press/v274/lanzillotta25a.html
%V 274
%X Continual learning is the problem of integrating new information in a model while retaining the knowledge acquired in the past. Despite the tangible improvements achieved in recent years, continual learning is far from solved, and even farther from being understood. A better understanding of the mechanisms behind the successes and failures of existing continual learning algorithms can unlock the development of new successful strategies. In this work we view continual learning from the perspective of the multi-task loss approximation, and we compare two alternative strategies, namely local and global approximations. We classify existing continual learning algorithms based on the approximation used, and we assess the practical effects of this distinction in common continual learning settings. Additionally, we study optimal continual learning objectives in the case of local polynomial approximations and we provide examples of existing algorithms implementing the optimal objectives.
APA
Lanzillotta, G., Singh, S.P., Grewe, B.F. & Hofmann, T. (2025). Local vs Global continual learning. Proceedings of The 3rd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 274:121-143. Available from https://proceedings.mlr.press/v274/lanzillotta25a.html.