Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks

Rouzbeh Meshkinnejad, Jie Mei, Zeduo Zhang, Daniel J Lizotte, Yalda Mohsenzadeh
Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models, PMLR 285:152-169, 2024.

Abstract

Contrastive representation learning has emerged as a promising technique for continual learning as it can learn representations that are robust to catastrophic forgetting and generalize well to unseen future tasks. Previous work in continual learning has addressed forgetting by using previous task data and trained models. Inspired by event models created and updated in the brain, we propose a new mechanism that takes place during task boundaries, i.e., when one task finishes and another starts. By observing the redundancy-inducing ability of contrastive loss on the output of a neural network, our method leverages the first few samples of the new task to identify and retain parameters contributing most to the transfer ability of the neural network, freeing up the remaining parts of the network to learn new features. We evaluate the proposed methods on benchmark computer vision datasets including CIFAR10 and TinyImagenet and demonstrate state-of-the-art performance in task-incremental, class-incremental, and domain-incremental continual learning scenarios.
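To make the task-boundary mechanism described above concrete, below is a minimal PyTorch-style sketch of one way such a step could look: score parameters using the first few samples of the incoming task, mark the highest-scoring ones as retained, and leave the rest plastic. The function names, the gradient-magnitude importance proxy, and the gradient-dampening update are illustrative assumptions, not the paper's exact formulation; see the full text for the actual criterion and update rule.

```python
# A minimal sketch of a task-boundary selective-plasticity step.
# The gradient-magnitude criterion and the names below are assumptions
# made for illustration; they are not taken from the paper.
import torch

def estimate_importance(model, compute_loss, first_new_samples):
    """Score each parameter by how strongly it influences a loss computed
    on the first few samples of the incoming task (gradient magnitude is
    used here as a simple proxy for 'contribution to transfer')."""
    model.zero_grad()
    loss = compute_loss(model, first_new_samples)  # e.g., a contrastive loss over augmented views
    loss.backward()
    return {n: p.grad.detach().abs()
            for n, p in model.named_parameters() if p.grad is not None}

def plasticity_masks(importance, keep_ratio=0.2):
    """Mark the top `keep_ratio` fraction of each tensor as retained (1);
    everything else (0) is left free to change while learning the new task."""
    masks = {}
    for name, score in importance.items():
        k = max(1, int(keep_ratio * score.numel()))
        thresh = score.flatten().topk(k).values.min()
        masks[name] = (score >= thresh).float()
    return masks

def dampen_protected_gradients(model, masks, retain_strength=0.99):
    """Call after loss.backward() during new-task training: scale down the
    gradients of retained parameters so they change slowly, while the rest
    of the network stays fully plastic."""
    for name, p in model.named_parameters():
        if p.grad is not None and name in masks:
            p.grad.mul_(1.0 - retain_strength * masks[name])
```

In this sketch, `dampen_protected_gradients` would be invoked after each backward pass while training on the new task, so retained parameters drift slowly rather than being frozen outright.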

Cite this Paper


BibTeX
@InProceedings{pmlr-v285-meshkinnejad24a,
  title     = {Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks},
  author    = {Meshkinnejad, Rouzbeh and Mei, Jie and Zhang, Zeduo and Lizotte, Daniel J and Mohsenzadeh, Yalda},
  booktitle = {Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models},
  pages     = {152--169},
  year      = {2024},
  editor    = {Fumero, Marco and Domine, Clementine and Lähner, Zorah and Crisostomi, Donato and Moschella, Luca and Stachenfeld, Kimberly},
  volume    = {285},
  series    = {Proceedings of Machine Learning Research},
  month     = {14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v285/main/assets/meshkinnejad24a/meshkinnejad24a.pdf},
  url       = {https://proceedings.mlr.press/v285/meshkinnejad24a.html},
  abstract  = {Contrastive representation learning has emerged as a promising technique for continual learning as it can learn representations that are robust to catastrophic forgetting and generalize well to unseen future tasks. Previous work in continual learning has addressed forgetting by using previous task data and trained models. Inspired by event models created and updated in the brain, we propose a new mechanism that takes place during task boundaries, i.e., when one task finishes and another starts. By observing the redundancy-inducing ability of contrastive loss on the output of a neural network, our method leverages the first few samples of the new task to identify and retain parameters contributing most to the transfer ability of the neural network, freeing up the remaining parts of the network to learn new features. We evaluate the proposed methods on benchmark computer vision datasets including CIFAR10 and TinyImagenet and demonstrate state-of-the-art performance in task-incremental, class-incremental, and domain-incremental continual learning scenarios.}
}
Endnote
%0 Conference Paper
%T Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks
%A Rouzbeh Meshkinnejad
%A Jie Mei
%A Zeduo Zhang
%A Daniel J Lizotte
%A Yalda Mohsenzadeh
%B Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models
%C Proceedings of Machine Learning Research
%D 2024
%E Marco Fumero
%E Clementine Domine
%E Zorah Lähner
%E Donato Crisostomi
%E Luca Moschella
%E Kimberly Stachenfeld
%F pmlr-v285-meshkinnejad24a
%I PMLR
%P 152--169
%U https://proceedings.mlr.press/v285/meshkinnejad24a.html
%V 285
%X Contrastive representation learning has emerged as a promising technique for continual learning as it can learn representations that are robust to catastrophic forgetting and generalize well to unseen future tasks. Previous work in continual learning has addressed forgetting by using previous task data and trained models. Inspired by event models created and updated in the brain, we propose a new mechanism that takes place during task boundaries, i.e., when one task finishes and another starts. By observing the redundancy-inducing ability of contrastive loss on the output of a neural network, our method leverages the first few samples of the new task to identify and retain parameters contributing most to the transfer ability of the neural network, freeing up the remaining parts of the network to learn new features. We evaluate the proposed methods on benchmark computer vision datasets including CIFAR10 and TinyImagenet and demonstrate state-of-the-art performance in task-incremental, class-incremental, and domain-incremental continual learning scenarios.
APA
Meshkinnejad, R., Mei, J., Zhang, Z., Lizotte, D.J., & Mohsenzadeh, Y. (2024). Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks. Proceedings of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models, in Proceedings of Machine Learning Research 285:152-169. Available from https://proceedings.mlr.press/v285/meshkinnejad24a.html.
