Two Complementary Perspectives to Continual Learning: Ask Not Only What to Optimize, But Also How

Timm Hess, Tinne Tuytelaars, Gido M van de Ven
Proceedings of the 1st ContinualAI Unconference, 2023, PMLR 249:37-61, 2024.

Abstract

Recent years have seen considerable progress in the continual training of deep neural networks, predominantly thanks to approaches that add replay or regularization terms to the loss function to approximate the joint loss over all tasks so far. However, we show that even with a perfect approximation to the joint loss, these approaches still suffer from temporary but substantial forgetting when starting to train on a new task. Motivated by this ‘stability gap’, we propose that continual learning strategies should focus not only on the optimization objective, but also on the way this objective is optimized. While there is some continual learning work that alters the optimization trajectory (e.g., using gradient projection techniques), this line of research is positioned as an alternative to improving the optimization objective, while we argue it should be complementary. In search of empirical support for our proposition, we perform a series of pre-registered experiments combining replay-approximated joint objectives with gradient projection-based optimization routines. However, this first experimental attempt fails to show clear and consistent benefits. Nevertheless, our conceptual arguments, as well as some of our empirical results, demonstrate the distinctive importance of the optimization trajectory in continual learning, thereby opening up a new direction for continual learning research.
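
To make the paper's two perspectives concrete, the following is a minimal, hypothetical sketch (not code from the paper) of one training step that combines a replay-approximated joint objective ('what to optimize') with an A-GEM-style gradient projection ('how to optimize'). The PyTorch setup, function names, and the specific projection and averaging rules are illustrative assumptions, not the authors' method.

import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    # Flatten the gradients of `loss` w.r.t. `params` into a single vector.
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def combined_step(model, optimizer, new_batch, replay_batch):
    # One hypothetical training step: a replay-approximated joint loss,
    # with the new-task gradient additionally projected away from
    # directions that would increase the loss on replayed (old-task) data.
    params = [p for p in model.parameters() if p.requires_grad]
    x_new, y_new = new_batch
    x_old, y_old = replay_batch

    # 'What to optimize': approximate the joint loss over all tasks so far
    # by combining the current-task loss with a loss on replayed samples.
    loss_new = F.cross_entropy(model(x_new), y_new)
    loss_old = F.cross_entropy(model(x_old), y_old)

    # 'How to optimize': A-GEM-style projection -- if the new-task gradient
    # conflicts with the replay gradient, remove the conflicting component.
    g_new = flat_grad(loss_new, params)
    g_old = flat_grad(loss_old, params)
    dot = torch.dot(g_new, g_old)
    if dot < 0:
        g_new = g_new - (dot / g_old.pow(2).sum()) * g_old

    # Average the two gradients (a simple stand-in for the joint objective)
    # and write the result back into the parameters' .grad buffers.
    g = 0.5 * (g_new + g_old)
    optimizer.zero_grad()
    offset = 0
    for p in params:
        n = p.numel()
        p.grad = g[offset:offset + n].view_as(p).clone()
        offset += n
    optimizer.step()
    return loss_new.item(), loss_old.item()

Note that, per the abstract, the paper's pre-registered experiments with such combinations did not show clear and consistent benefits; the sketch only illustrates how the two components can be wired together in one update.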

Cite this Paper


BibTeX
@InProceedings{pmlr-v249-hess24a,
  title     = {Two Complementary Perspectives to Continual Learning: Ask Not Only What to Optimize, But Also How},
  author    = {Hess, Timm and Tuytelaars, Tinne and van de Ven, Gido M},
  booktitle = {Proceedings of the 1st ContinualAI Unconference, 2023},
  pages     = {37--61},
  year      = {2024},
  editor    = {Swaroop, Siddharth and Mundt, Martin and Aljundi, Rahaf and Khan, Mohammad Emtiyaz},
  volume    = {249},
  series    = {Proceedings of Machine Learning Research},
  month     = {09 Oct},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v249/main/assets/hess24a/hess24a.pdf},
  url       = {https://proceedings.mlr.press/v249/hess24a.html}
}
Endnote
%0 Conference Paper
%T Two Complementary Perspectives to Continual Learning: Ask Not Only What to Optimize, But Also How
%A Timm Hess
%A Tinne Tuytelaars
%A Gido M van de Ven
%B Proceedings of the 1st ContinualAI Unconference, 2023
%C Proceedings of Machine Learning Research
%D 2024
%E Siddharth Swaroop
%E Martin Mundt
%E Rahaf Aljundi
%E Mohammad Emtiyaz Khan
%F pmlr-v249-hess24a
%I PMLR
%P 37--61
%U https://proceedings.mlr.press/v249/hess24a.html
%V 249
APA
Hess, T., Tuytelaars, T. & van de Ven, G. M. (2024). Two Complementary Perspectives to Continual Learning: Ask Not Only What to Optimize, But Also How. Proceedings of the 1st ContinualAI Unconference, 2023, in Proceedings of Machine Learning Research 249:37-61. Available from https://proceedings.mlr.press/v249/hess24a.html.
