Mitigating Interference in the Knowledge Continuum through Attention-Guided Incremental Learning

Prashant Shivaram Bhat, Bharath Chennamkulam Renjith, Elahe Arani, Bahram Zonooz
Proceedings of The 3rd Conference on Lifelong Learning Agents, PMLR 274:144-160, 2025.

Abstract

Continual learning (CL) remains a significant challenge for deep neural networks, as it is prone to forgetting previously acquired knowledge. Several approaches have been proposed in the literature, such as experience rehearsal, regularization, and parameter isolation, to address this problem. Although almost zero forgetting can be achieved in task-incremental learning, class-incremental learning remains highly challenging due to the problem of inter-task class separation. Limited access to previous task data makes it difficult to discriminate between classes of current and previous tasks. To address this issue, we propose ‘Attention-Guided Incremental Learning’ (AGILE), a novel rehearsal-based CL approach that incorporates compact task attention to effectively reduce interference between tasks. AGILE utilizes lightweight, learnable task projection vectors to transform the latent representations of a shared task attention module toward the task distribution. Through extensive empirical evaluation, we show that AGILE significantly improves generalization performance by mitigating task interference and outperforms rehearsal-based approaches in several CL scenarios. Furthermore, AGILE can scale well to a large number of tasks with minimal overhead while remaining well-calibrated with reduced task-recency bias.
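
As a rough illustration of the mechanism the abstract describes (a shared task-attention module whose latent representation is steered by compact, learnable per-task projection vectors), the following minimal PyTorch sketch may help. It is written from the abstract alone, not from the paper: the names SharedTaskAttention, task_vectors, and feat_dim, and the gating formulation, are illustrative assumptions rather than the authors' implementation.

    # Minimal sketch: shared attention module plus one learnable projection
    # vector per task, as suggested by the abstract. Not the AGILE code.
    import torch
    import torch.nn as nn

    class SharedTaskAttention(nn.Module):
        """Shared attention over features; one compact projection vector per task."""

        def __init__(self, feat_dim: int, max_tasks: int):
            super().__init__()
            # Shared (task-agnostic) attention network.
            self.attn = nn.Sequential(
                nn.Linear(feat_dim, feat_dim),
                nn.ReLU(),
                nn.Linear(feat_dim, feat_dim),
            )
            # Lightweight, learnable task projection vectors (hypothetical parameterization).
            self.task_vectors = nn.ParameterList(
                [nn.Parameter(torch.ones(feat_dim)) for _ in range(max_tasks)]
            )

        def forward(self, features: torch.Tensor, task_id: int) -> torch.Tensor:
            # Shared latent representation, modulated toward the current task's
            # distribution by its projection vector, then used as an attention gate.
            latent = self.attn(features) * self.task_vectors[task_id]
            gate = torch.sigmoid(latent)
            return features * gate

    if __name__ == "__main__":
        attn = SharedTaskAttention(feat_dim=512, max_tasks=10)
        x = torch.randn(32, 512)           # batch of backbone features
        out_t0 = attn(x, task_id=0)        # features attended for task 0
        out_t3 = attn(x, task_id=3)        # same shared module, different task vector
        print(out_t0.shape, out_t3.shape)  # torch.Size([32, 512]) twice

Under these assumptions, only the small per-task vectors grow with the number of tasks, which is consistent with the abstract's claim that the method scales to many tasks with minimal overhead.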

Cite this Paper


BibTeX
@InProceedings{pmlr-v274-bhat25a,
  title     = {Mitigating Interference in the Knowledge Continuum through Attention-Guided Incremental Learning},
  author    = {Bhat, Prashant Shivaram and Renjith, Bharath Chennamkulam and Arani, Elahe and Zonooz, Bahram},
  booktitle = {Proceedings of The 3rd Conference on Lifelong Learning Agents},
  pages     = {144--160},
  year      = {2025},
  editor    = {Lomonaco, Vincenzo and Melacci, Stefano and Tuytelaars, Tinne and Chandar, Sarath and Pascanu, Razvan},
  volume    = {274},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Jul--01 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v274/main/assets/bhat25a/bhat25a.pdf},
  url       = {https://proceedings.mlr.press/v274/bhat25a.html},
  abstract  = {Continual learning (CL) remains a significant challenge for deep neural networks, as it is prone to forgetting previously acquired knowledge. Several approaches have been proposed in the literature, such as experience rehearsal, regularization, and parameter isolation, to address this problem. Although almost zero forgetting can be achieved in task-incremental learning, class-incremental learning remains highly challenging due to the problem of inter-task class separation. Limited access to previous task data makes it difficult to discriminate between classes of current and previous tasks. To address this issue, we propose ‘Attention-Guided Incremental Learning’ (AGILE), a novel rehearsal-based CL approach that incorporates compact task attention to effectively reduce interference between tasks. AGILE utilizes lightweight, learnable task projection vectors to transform the latent representations of a shared task attention module toward the task distribution. Through extensive empirical evaluation, we show that AGILE significantly improves generalization performance by mitigating task interference and outperforms rehearsal-based approaches in several CL scenarios. Furthermore, AGILE can scale well to a large number of tasks with minimal overhead while remaining well-calibrated with reduced task-recency bias.}
}
Endnote
%0 Conference Paper
%T Mitigating Interference in the Knowledge Continuum through Attention-Guided Incremental Learning
%A Prashant Shivaram Bhat
%A Bharath Chennamkulam Renjith
%A Elahe Arani
%A Bahram Zonooz
%B Proceedings of The 3rd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2025
%E Vincenzo Lomonaco
%E Stefano Melacci
%E Tinne Tuytelaars
%E Sarath Chandar
%E Razvan Pascanu
%F pmlr-v274-bhat25a
%I PMLR
%P 144--160
%U https://proceedings.mlr.press/v274/bhat25a.html
%V 274
%X Continual learning (CL) remains a significant challenge for deep neural networks, as it is prone to forgetting previously acquired knowledge. Several approaches have been proposed in the literature, such as experience rehearsal, regularization, and parameter isolation, to address this problem. Although almost zero forgetting can be achieved in task-incremental learning, class-incremental learning remains highly challenging due to the problem of inter-task class separation. Limited access to previous task data makes it difficult to discriminate between classes of current and previous tasks. To address this issue, we propose ‘Attention-Guided Incremental Learning’ (AGILE), a novel rehearsal-based CL approach that incorporates compact task attention to effectively reduce interference between tasks. AGILE utilizes lightweight, learnable task projection vectors to transform the latent representations of a shared task attention module toward the task distribution. Through extensive empirical evaluation, we show that AGILE significantly improves generalization performance by mitigating task interference and outperforms rehearsal-based approaches in several CL scenarios. Furthermore, AGILE can scale well to a large number of tasks with minimal overhead while remaining well-calibrated with reduced task-recency bias.
APA
Bhat, P. S., Renjith, B. C., Arani, E., & Zonooz, B. (2025). Mitigating Interference in the Knowledge Continuum through Attention-Guided Incremental Learning. Proceedings of The 3rd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 274:144-160. Available from https://proceedings.mlr.press/v274/bhat25a.html.