The Ideal Continual Learner: An Agent That Never Forgets

Liangzu Peng, Paris Giampouras, Rene Vidal
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:27585-27610, 2023.

Abstract

The goal of continual learning is to find a model that solves multiple learning tasks which are presented sequentially to the learner. A key challenge in this setting is that the learner may "forget" how to solve a previous task when learning a new task, a phenomenon known as catastrophic forgetting. To address this challenge, many practical methods have been proposed, including memory-based, regularization-based and expansion-based methods. However, a rigorous theoretical understanding of these methods remains elusive. This paper aims to bridge this gap between theory and practice by proposing a new continual learning framework called "Ideal Continual Learner" (ICL), which is guaranteed to avoid catastrophic forgetting by construction. We show that ICL unifies multiple well-established continual learning methods and gives new theoretical insights into the strengths and weaknesses of these methods. We also derive generalization bounds for ICL which allow us to theoretically quantify "how rehearsal affects generalization". Finally, we connect ICL to several classic subjects and research topics of modern interest, which allows us to make historical remarks and inspire future directions.
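
To make the "by construction" guarantee concrete, here is a hedged sketch of the kind of constrained formulation the abstract alludes to (the symbols $g_j$ and $w$, and the exact form below, are illustrative rather than quoted from the paper): given tasks $1, \dots, k$ with objectives $g_1, \dots, g_k$ over a shared parameter space, an ideal continual learner solves

% Sketch: minimize the current task's loss while staying inside the
% set of minimizers of all previous tasks (assumed nonempty).
w_k^\ast \in \operatorname*{argmin}_{w} \; g_k(w)
\quad \text{s.t.} \quad
w \in \bigcap_{j=1}^{k-1} \operatorname*{argmin}_{w'} g_j(w').

Whenever the intersection of the past solution sets is nonempty (i.e., the tasks admit a common minimizer), updating on task $k$ cannot increase the loss of any earlier task, so forgetting is ruled out by the constraint itself rather than by a regularization or rehearsal heuristic.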

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-peng23a,
  title     = {The Ideal Continual Learner: An Agent That Never Forgets},
  author    = {Peng, Liangzu and Giampouras, Paris and Vidal, Rene},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {27585--27610},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/peng23a/peng23a.pdf},
  url       = {https://proceedings.mlr.press/v202/peng23a.html},
  abstract  = {The goal of continual learning is to find a model that solves multiple learning tasks which are presented sequentially to the learner. A key challenge in this setting is that the learner may "forget" how to solve a previous task when learning a new task, a phenomenon known as catastrophic forgetting. To address this challenge, many practical methods have been proposed, including memory-based, regularization-based and expansion-based methods. However, a rigorous theoretical understanding of these methods remains elusive. This paper aims to bridge this gap between theory and practice by proposing a new continual learning framework called "Ideal Continual Learner" (ICL), which is guaranteed to avoid catastrophic forgetting by construction. We show that ICL unifies multiple well-established continual learning methods and gives new theoretical insights into the strengths and weaknesses of these methods. We also derive generalization bounds for ICL which allow us to theoretically quantify "how rehearsal affects generalization". Finally, we connect ICL to several classic subjects and research topics of modern interest, which allows us to make historical remarks and inspire future directions.}
}
Endnote
%0 Conference Paper
%T The Ideal Continual Learner: An Agent That Never Forgets
%A Liangzu Peng
%A Paris Giampouras
%A Rene Vidal
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-peng23a
%I PMLR
%P 27585--27610
%U https://proceedings.mlr.press/v202/peng23a.html
%V 202
%X The goal of continual learning is to find a model that solves multiple learning tasks which are presented sequentially to the learner. A key challenge in this setting is that the learner may "forget" how to solve a previous task when learning a new task, a phenomenon known as catastrophic forgetting. To address this challenge, many practical methods have been proposed, including memory-based, regularization-based and expansion-based methods. However, a rigorous theoretical understanding of these methods remains elusive. This paper aims to bridge this gap between theory and practice by proposing a new continual learning framework called "Ideal Continual Learner" (ICL), which is guaranteed to avoid catastrophic forgetting by construction. We show that ICL unifies multiple well-established continual learning methods and gives new theoretical insights into the strengths and weaknesses of these methods. We also derive generalization bounds for ICL which allow us to theoretically quantify "how rehearsal affects generalization". Finally, we connect ICL to several classic subjects and research topics of modern interest, which allows us to make historical remarks and inspire future directions.
APA
Peng, L., Giampouras, P., & Vidal, R. (2023). The Ideal Continual Learner: An Agent That Never Forgets. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:27585-27610. Available from https://proceedings.mlr.press/v202/peng23a.html.