Improving Online Continual Learning Performance and Stability with Temporal Ensembles

Albin Soutif–Cormerais, Antonio Carta, Joost van de Weijer
Proceedings of The 2nd Conference on Lifelong Learning Agents, PMLR 232:828-845, 2023.

Abstract

Neural networks are very effective when trained on large datasets for a large number of iterations. However, when they are trained online on non-stationary streams of data, their performance is reduced (1) by the online setup, which limits the availability of data, and (2) by catastrophic forgetting caused by the non-stationary nature of the data. Furthermore, several recent works (Caccia et al. 2022, Lange et al. 2023) showed that replay methods used in continual learning suffer from the stability gap, which is encountered when evaluating the model continually (rather than only at task boundaries). In this article, we study model ensembling as a way to improve performance and stability in online continual learning. We observe that naively ensembling models coming from a variety of training tasks considerably increases performance in online continual learning. Starting from this observation, and drawing inspiration from semi-supervised learning ensembling methods, we use a lightweight temporal ensemble that computes the exponential moving average (EMA) of the weights at test time, and show that it can drastically increase performance and stability when used in combination with several methods from the literature.
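The temporal ensemble described above keeps a second copy of the weights that is updated as an exponential moving average of the training weights and is used only for evaluation. A minimal sketch of that idea follows; it is an illustration, not the authors' implementation, and the class name, `decay` value, and dict-of-scalars weight representation are all assumptions made for the example.

```python
class EMAWeights:
    """Temporal ensemble via an exponential moving average of weights.

    After each online training step, call `update` with the current
    (training) weights; evaluate with `ema` instead of the raw weights.
    A decay close to 1 yields a slowly moving, more stable ensemble.
    """

    def __init__(self, weights, decay=0.99):
        self.decay = decay
        # Start the ensemble from the initial weights.
        self.ema = {name: float(w) for name, w in weights.items()}

    def update(self, weights):
        d = self.decay
        for name, w in weights.items():
            # ema <- d * ema + (1 - d) * current weight
            self.ema[name] = d * self.ema[name] + (1.0 - d) * float(w)


# Toy usage: a single scalar "weight" drifting over a non-stationary stream.
model = {"w": 0.0}
ensemble = EMAWeights(model, decay=0.9)
for step in range(1, 6):
    model["w"] = float(step)  # simulated online update
    ensemble.update(model)
# The EMA lags behind the raw weight, smoothing abrupt changes.
```

In practice the same update is applied to every tensor of a network (e.g. after each minibatch), which costs one extra copy of the parameters and no extra forward or backward passes.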

Cite this Paper


BibTeX
@InProceedings{pmlr-v232-soutif-cormerais23a,
  title     = {Improving Online Continual Learning Performance and Stability with Temporal Ensembles},
  author    = {Soutif--Cormerais, Albin and Carta, Antonio and van de Weijer, Joost},
  booktitle = {Proceedings of The 2nd Conference on Lifelong Learning Agents},
  pages     = {828--845},
  year      = {2023},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Sedghi, Hanie and Precup, Doina},
  volume    = {232},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v232/soutif-cormerais23a/soutif-cormerais23a.pdf},
  url       = {https://proceedings.mlr.press/v232/soutif-cormerais23a.html},
  abstract  = {Neural networks are very effective when trained on large datasets for a large number of iterations. However, when they are trained on non-stationary streams of data and in an online fashion, their performance is reduced (1) by the online setup, which limits the availability of data, (2) due to catastrophic forgetting because of the non-stationary nature of the data. Furthermore, several recent works (Caccia et al. 2022, Lange et al. 2023) showed that replay methods used in continual learning suffer from the \textit{stability gap}, encountered when evaluating the model continually (rather than only on task boundaries). In this article, we study the effect of model ensembling as a way to improve performance and stability in online continual learning. We notice that naively ensembling models coming from a variety of training tasks increases the performance in online continual learning considerably. Starting from this observation, and drawing inspirations from semi-supervised learning ensembling methods, we use a lightweight temporal ensemble that computes the exponential moving average of the weights (EMA) at test time, and show that it can drastically increase the performance and stability when used in combination with several methods from the literature.}
}
Endnote
%0 Conference Paper
%T Improving Online Continual Learning Performance and Stability with Temporal Ensembles
%A Albin Soutif–Cormerais
%A Antonio Carta
%A Joost van de Weijer
%B Proceedings of The 2nd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2023
%E Sarath Chandar
%E Razvan Pascanu
%E Hanie Sedghi
%E Doina Precup
%F pmlr-v232-soutif-cormerais23a
%I PMLR
%P 828--845
%U https://proceedings.mlr.press/v232/soutif-cormerais23a.html
%V 232
%X Neural networks are very effective when trained on large datasets for a large number of iterations. However, when they are trained on non-stationary streams of data and in an online fashion, their performance is reduced (1) by the online setup, which limits the availability of data, (2) due to catastrophic forgetting because of the non-stationary nature of the data. Furthermore, several recent works (Caccia et al. 2022, Lange et al. 2023) showed that replay methods used in continual learning suffer from the stability gap, encountered when evaluating the model continually (rather than only on task boundaries). In this article, we study the effect of model ensembling as a way to improve performance and stability in online continual learning. We notice that naively ensembling models coming from a variety of training tasks increases the performance in online continual learning considerably. Starting from this observation, and drawing inspirations from semi-supervised learning ensembling methods, we use a lightweight temporal ensemble that computes the exponential moving average of the weights (EMA) at test time, and show that it can drastically increase the performance and stability when used in combination with several methods from the literature.
APA
Soutif–Cormerais, A., Carta, A. & van de Weijer, J. (2023). Improving Online Continual Learning Performance and Stability with Temporal Ensembles. Proceedings of The 2nd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 232:828-845. Available from https://proceedings.mlr.press/v232/soutif-cormerais23a.html.
