Parabolic Continual Learning

Haoming Yang, Ali Hasan, Vahid Tarokh
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:2620-2628, 2025.

Abstract

Regularizing continual learning techniques is important for anticipating algorithmic behavior under new realizations of data. We introduce a new approach to continual learning by imposing the properties of a parabolic partial differential equation (PDE) to regularize the expected behavior of the loss over time. This class of parabolic PDEs has a number of favorable properties that allow us to analyze both the error incurred through forgetting and the error induced through generalization. Specifically, we do this by imposing boundary conditions, where the boundary is given by a memory buffer. By using the memory buffer as a boundary, we can enforce long-term dependencies by bounding the expected error by the boundary loss. Finally, we illustrate the empirical performance of the method on a series of continual learning tasks.
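
The claim that the expected error is bounded by the boundary loss echoes the maximum principle for parabolic equations: a solution of the heat equation attains its maximum over the space-time cylinder on the parabolic boundary. If the expected loss behaves like such a solution, its interior values are controlled by its values on the boundary, which the method identifies with the memory buffer. In generic notation (a sketch, not necessarily the paper's exact statement):

    \partial_t u = \nu\,\Delta u \quad \text{on } \Omega \times (0, T], \qquad \max_{\overline{\Omega} \times [0, T]} u \;\le\; \max_{\partial_p(\Omega \times (0, T])} u,

where \partial_p denotes the parabolic boundary (the initial time slice together with the spatial boundary).

The following minimal PyTorch sketch illustrates one way such a constraint could be imposed: a small surrogate network u(x, t) for the expected loss is fit so that a data term anchors it to losses recorded on memory-buffer samples (the boundary condition), while a residual term penalizes deviation from the heat equation at sampled interior points. All names (LossSurrogate, heat_residual, pcl_step), the specific heat-equation form, and the hyperparameters are illustrative assumptions; this is not the authors' implementation.

    import torch
    import torch.nn as nn

    # Illustrative surrogate u(x, t) for the expected loss over inputs x and
    # continual-learning time t (assumed architecture, not from the paper).
    class LossSurrogate(nn.Module):
        def __init__(self, x_dim=2, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(x_dim + 1, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x, t):
            return self.net(torch.cat([x, t], dim=-1))

    def heat_residual(u_net, x, t, nu=0.1):
        # Residual of the model equation u_t = nu * Laplacian_x(u), computed
        # with autograd; nu is an assumed diffusion coefficient.
        x = x.detach().requires_grad_(True)
        t = t.detach().requires_grad_(True)
        u = u_net(x, t)
        u_x, u_t = torch.autograd.grad(u.sum(), (x, t), create_graph=True)
        lap = torch.zeros_like(u_t.squeeze(-1))
        for i in range(x.shape[-1]):  # Laplacian = sum of second derivatives
            u_xixi = torch.autograd.grad(u_x[:, i].sum(), x, create_graph=True)[0][:, i]
            lap = lap + u_xixi
        return u_t.squeeze(-1) - nu * lap

    def pcl_step(u_net, opt, buf_x, buf_t, buf_loss, int_x, int_t, lam=1.0):
        # Boundary term: match losses observed on the memory buffer.
        # Interior term: enforce the parabolic PDE at sampled points.
        opt.zero_grad()
        boundary = ((u_net(buf_x, buf_t).squeeze(-1) - buf_loss) ** 2).mean()
        interior = (heat_residual(u_net, int_x, int_t) ** 2).mean()
        total = boundary + lam * interior
        total.backward()
        opt.step()
        return total.item()

A few optimization steps on random stand-in data, for example:

    u_net = LossSurrogate(x_dim=2)
    opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
    buf_x, buf_t = torch.randn(32, 2), torch.rand(32, 1)
    buf_loss = torch.rand(32)  # stand-in for losses recorded on the buffer
    int_x, int_t = torch.randn(64, 2), torch.rand(64, 1)
    for _ in range(100):
        pcl_step(u_net, opt, buf_x, buf_t, buf_loss, int_x, int_t)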

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-yang25b,
  title     = {Parabolic Continual Learning},
  author    = {Yang, Haoming and Hasan, Ali and Tarokh, Vahid},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {2620--2628},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/yang25b/yang25b.pdf},
  url       = {https://proceedings.mlr.press/v258/yang25b.html},
  abstract  = {Regularizing continual learning techniques is important for anticipating algorithmic behavior under new realizations of data. We introduce a new approach to continual learning by imposing the properties of a parabolic partial differential equation (PDE) to regularize the expected behavior of the loss over time. This class of parabolic PDEs has a number of favorable properties that allow us to analyze the error incurred through forgetting and the error induced through generalization. Specifically, we do this through imposing boundary conditions where the boundary is given by a memory buffer. By using the memory buffer as a boundary, we can enforce long term dependencies by bounding the expected error by the boundary loss. Finally, we illustrate the empirical performance of the method on a series of continual learning tasks.}
}
Endnote
%0 Conference Paper
%T Parabolic Continual Learning
%A Haoming Yang
%A Ali Hasan
%A Vahid Tarokh
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-yang25b
%I PMLR
%P 2620--2628
%U https://proceedings.mlr.press/v258/yang25b.html
%V 258
%X Regularizing continual learning techniques is important for anticipating algorithmic behavior under new realizations of data. We introduce a new approach to continual learning by imposing the properties of a parabolic partial differential equation (PDE) to regularize the expected behavior of the loss over time. This class of parabolic PDEs has a number of favorable properties that allow us to analyze the error incurred through forgetting and the error induced through generalization. Specifically, we do this through imposing boundary conditions where the boundary is given by a memory buffer. By using the memory buffer as a boundary, we can enforce long term dependencies by bounding the expected error by the boundary loss. Finally, we illustrate the empirical performance of the method on a series of continual learning tasks.
APA
Yang, H., Hasan, A. & Tarokh, V. (2025). Parabolic Continual Learning. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:2620-2628. Available from https://proceedings.mlr.press/v258/yang25b.html.