Does Alignment help continual learning?

Anurag Daram, Dhireesha Kudithipudi
Proceedings of The Workshop on Classifier Learning from Difficult Data, PMLR 263:48-55, 2024.

Abstract

Backpropagation relies on instantaneous weight transport and global updates, which calls its neural plausibility into question. Even so, continual learning mechanisms that are largely biologically inspired still employ backpropagation as the baseline training rule. In this work, we examine the role of learning rules that avoid the weight transport problem in the context of continual learning. We investigate weight estimation approaches that use linear combinations of local and non-local regularization primitives for alignment-based learning. We couple these approaches with parameter regularization and replay mechanisms to demonstrate robust continual learning capabilities. We show that the layer-wise operations observed in alignment-based learning help boost performance. We evaluate the proposed models in complex task-aware and task-free scenarios on multiple image classification datasets. We study the dynamics of representational similarity for these learning rules compared to backpropagation. Lastly, we map representational similarity to the knowledge preservation capabilities of the models.
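To make the weight transport problem concrete, here is a minimal sketch of feedback alignment, one representative alignment-based learning rule, on a two-layer network. The only change from backpropagation is that the backward pass routes the output error through a fixed random matrix B rather than the transpose of the forward weights W2, so no weight transport is needed. This illustrates the general idea only, not the paper's model: the layer sizes, learning rate, and squared-error loss are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 784, 256, 10

    W1 = rng.normal(0.0, 0.05, (n_hid, n_in))   # forward weights, layer 1
    W2 = rng.normal(0.0, 0.05, (n_out, n_hid))  # forward weights, layer 2
    B = rng.normal(0.0, 0.05, (n_out, n_hid))   # fixed random feedback weights

    def fa_step(W1, W2, x, y_true, lr=0.01):
        h = np.maximum(W1 @ x, 0.0)          # hidden ReLU activations
        y = W2 @ h                           # linear readout
        e = y - y_true                       # output error (squared-error loss)
        dh = (B.T @ e) * (h > 0)             # B.T replaces the W2.T of backprop
        W2 = W2 - lr * np.outer(e, h)        # layer-wise outer-product updates
        W1 = W1 - lr * np.outer(dh, x)
        return W1, W2

    x = rng.normal(size=n_in)                # a dummy input
    y_true = np.eye(n_out)[3]                # a one-hot target
    W1, W2 = fa_step(W1, W2, x, y_true)

Over training, the forward weights tend to align with the fixed feedback weights, which is why the random backward pathway still delivers useful error signals.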

Cite this Paper

BibTeX
@InProceedings{pmlr-v263-daram24a,
  title = {Does Alignment help continual learning?},
  author = {Daram, Anurag and Kudithipudi, Dhireesha},
  booktitle = {Proceedings of The Workshop on Classifier Learning from Difficult Data},
  pages = {48--55},
  year = {2024},
  editor = {Zyblewski, Pawel and Grana, Manuel and Ksieniewicz, Pawel and Minku, Leandro},
  volume = {263},
  series = {Proceedings of Machine Learning Research},
  month = {19--20 Oct},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v263/main/assets/daram24a/daram24a.pdf},
  url = {https://proceedings.mlr.press/v263/daram24a.html},
  abstract = {Backpropagation relies on instantaneous weight transport and global updates, which calls its neural plausibility into question. Even so, continual learning mechanisms that are largely biologically inspired still employ backpropagation as the baseline training rule. In this work, we examine the role of learning rules that avoid the weight transport problem in the context of continual learning. We investigate weight estimation approaches that use linear combinations of local and non-local regularization primitives for alignment-based learning. We couple these approaches with parameter regularization and replay mechanisms to demonstrate robust continual learning capabilities. We show that the layer-wise operations observed in alignment-based learning help boost performance. We evaluate the proposed models in complex task-aware and task-free scenarios on multiple image classification datasets. We study the dynamics of representational similarity for these learning rules compared to backpropagation. Lastly, we map representational similarity to the knowledge preservation capabilities of the models.}
}
Endnote
%0 Conference Paper
%T Does Alignment help continual learning?
%A Anurag Daram
%A Dhireesha Kudithipudi
%B Proceedings of The Workshop on Classifier Learning from Difficult Data
%C Proceedings of Machine Learning Research
%D 2024
%E Pawel Zyblewski
%E Manuel Grana
%E Pawel Ksieniewicz
%E Leandro Minku
%F pmlr-v263-daram24a
%I PMLR
%P 48--55
%U https://proceedings.mlr.press/v263/daram24a.html
%V 263
%X Backpropagation relies on instantaneous weight transport and global updates, which calls its neural plausibility into question. Even so, continual learning mechanisms that are largely biologically inspired still employ backpropagation as the baseline training rule. In this work, we examine the role of learning rules that avoid the weight transport problem in the context of continual learning. We investigate weight estimation approaches that use linear combinations of local and non-local regularization primitives for alignment-based learning. We couple these approaches with parameter regularization and replay mechanisms to demonstrate robust continual learning capabilities. We show that the layer-wise operations observed in alignment-based learning help boost performance. We evaluate the proposed models in complex task-aware and task-free scenarios on multiple image classification datasets. We study the dynamics of representational similarity for these learning rules compared to backpropagation. Lastly, we map representational similarity to the knowledge preservation capabilities of the models.
APA
Daram, A. & Kudithipudi, D. (2024). Does Alignment help continual learning? Proceedings of The Workshop on Classifier Learning from Difficult Data, in Proceedings of Machine Learning Research 263:48-55. Available from https://proceedings.mlr.press/v263/daram24a.html.
