Preserving Plasticity in Continual Learning with Adaptive Linearity Injection

Seyed Roozbeh Razavi Rohani, Khashayar Khajavi, Wesley Chung, Mo Chen, Sharan Vaswani
Proceedings of The 4th Conference on Lifelong Learning Agents, PMLR 330:418-444, 2026.

Abstract

Loss of plasticity in deep neural networks is the gradual reduction in a model’s capacity to incrementally learn and has been identified as a key obstacle to learning in non-stationary problem settings. Recent work has shown that deep linear networks tend to be resilient to loss of plasticity. Motivated by this observation, we propose $\textbf{Ada}$ptive $\textbf{Lin}$earization ($\texttt{AdaLin}$), a general approach that dynamically adapts each neuron’s activation function to mitigate plasticity loss. Unlike prior methods that rely on regularization or periodic resets, $\texttt{AdaLin}$ equips every neuron with a learnable parameter and a gating mechanism that injects linearity into the activation function based on its gradient flow. This adaptive modulation ensures sufficient gradient signal and sustains continual learning without introducing additional hyperparameters or requiring explicit task boundaries. When used with conventional activation functions like ReLU and Tanh, we demonstrate that $\texttt{AdaLin}$ can significantly improve performance on standard benchmarks, including Random Label and Permuted MNIST, Random Label and Shuffled CIFAR 10, and Class-Split CIFAR 100. Our findings show that a per-neuron PReLU, as recovered by $\texttt{AdaLin}$ with ReLU, is surprisingly effective in mitigating plasticity loss. We also perform a systematic set of ablations showing that neuron-level adaptation is crucial for good performance, and analyze a number of network metrics that might be correlated with loss of plasticity. Our code is publicly available at: https://github.com/RoozbehRazavi/AdaLin.git
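For illustration, the per-neuron PReLU that the abstract says AdaLin recovers with a ReLU base activation can be sketched as below. This is a minimal sketch only, not the paper's implementation: the slope list `alpha` stands in for AdaLin's per-neuron learnable parameter, and the gradient-based gating mechanism described in the abstract is omitted.

```python
def per_neuron_prelu(z, alpha):
    """Per-neuron PReLU: f(z_i) = z_i if z_i > 0, else alpha_i * z_i.

    z     -- list of pre-activations, one per neuron
    alpha -- list of learnable slopes, one per neuron
             (alpha_i = 0 recovers plain ReLU; alpha_i = 1 makes unit i
              fully linear, so gradient signal survives even where a
              ReLU unit would be "dead")
    """
    return [zi if zi > 0 else ai * zi for zi, ai in zip(z, alpha)]

# Two neurons with different learned slopes:
print(per_neuron_prelu([-2.0, 3.0], [0.25, 0.5]))  # [-0.5, 3.0]
```

Because each neuron carries its own slope, the degree of injected linearity can differ across units, which is the neuron-level adaptation the ablations in the paper highlight.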

Cite this Paper


BibTeX
@InProceedings{pmlr-v330-rohani26a,
  title     = {Preserving Plasticity in Continual Learning with Adaptive Linearity Injection},
  author    = {Rohani, Seyed Roozbeh Razavi and Khajavi, Khashayar and Chung, Wesley and Chen, Mo and Vaswani, Sharan},
  booktitle = {Proceedings of The 4th Conference on Lifelong Learning Agents},
  pages     = {418--444},
  year      = {2026},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Eaton, Eric and Liu, Bing and Mahmood, Rupam and Rannen-Triki, Amal},
  volume    = {330},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--14 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v330/main/assets/rohani26a/rohani26a.pdf},
  url       = {https://proceedings.mlr.press/v330/rohani26a.html},
  abstract  = {Loss of plasticity in deep neural networks is the gradual reduction in a model’s capacity to incrementally learn and has been identified as a key obstacle to learning in non-stationary problem settings. Recent work has shown that deep linear networks tend to be resilient towards loss of plasticity. Motivated by this observation, we propose $\textbf{Ada}$ptive $\textbf{Lin}$earization ($\texttt{AdaLin}$), a general approach that dynamically adapts each neuron’s activation function to mitigate plasticity loss. Unlike prior methods that rely on regularization or periodic resets, $\texttt{AdaLin}$ equips every neuron with a learnable parameter and a gating mechanism that injects linearity into the activation function based on its gradient flow. This adaptive modulation ensures sufficient gradient signal and sustains continual learning without introducing additional hyperparameters or requiring explicit task boundaries. When used with conventional activation functions like ReLU and Tanh, we demonstrate that $\texttt{AdaLin}$ can significantly improve the performance on standard benchmarks, including Random Label and Permuted MNIST, Random Label and Shuffled CIFAR 10, and Class-Split CIFAR 100. Our findings show that a per-neuron PReLU, as recovered by $\texttt{AdaLin}$ with ReLU, is surprisingly effective in mitigating plasticity loss. We also perform a systematic set of ablations that show that neuron-level adaptation is crucial for good performance, and analyze a number of metrics in the network that might be correlated to loss of plasticity. Our code is publicly available at: https://github.com/RoozbehRazavi/AdaLin.git}
}
Endnote
%0 Conference Paper
%T Preserving Plasticity in Continual Learning with Adaptive Linearity Injection
%A Seyed Roozbeh Razavi Rohani
%A Khashayar Khajavi
%A Wesley Chung
%A Mo Chen
%A Sharan Vaswani
%B Proceedings of The 4th Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2026
%E Sarath Chandar
%E Razvan Pascanu
%E Eric Eaton
%E Bing Liu
%E Rupam Mahmood
%E Amal Rannen-Triki
%F pmlr-v330-rohani26a
%I PMLR
%P 418--444
%U https://proceedings.mlr.press/v330/rohani26a.html
%V 330
%X Loss of plasticity in deep neural networks is the gradual reduction in a model’s capacity to incrementally learn and has been identified as a key obstacle to learning in non-stationary problem settings. Recent work has shown that deep linear networks tend to be resilient towards loss of plasticity. Motivated by this observation, we propose $\textbf{Ada}$ptive $\textbf{Lin}$earization ($\texttt{AdaLin}$), a general approach that dynamically adapts each neuron’s activation function to mitigate plasticity loss. Unlike prior methods that rely on regularization or periodic resets, $\texttt{AdaLin}$ equips every neuron with a learnable parameter and a gating mechanism that injects linearity into the activation function based on its gradient flow. This adaptive modulation ensures sufficient gradient signal and sustains continual learning without introducing additional hyperparameters or requiring explicit task boundaries. When used with conventional activation functions like ReLU and Tanh, we demonstrate that $\texttt{AdaLin}$ can significantly improve the performance on standard benchmarks, including Random Label and Permuted MNIST, Random Label and Shuffled CIFAR 10, and Class-Split CIFAR 100. Our findings show that a per-neuron PReLU, as recovered by $\texttt{AdaLin}$ with ReLU, is surprisingly effective in mitigating plasticity loss. We also perform a systematic set of ablations that show that neuron-level adaptation is crucial for good performance, and analyze a number of metrics in the network that might be correlated to loss of plasticity. Our code is publicly available at: https://github.com/RoozbehRazavi/AdaLin.git
APA
Rohani, S.R.R., Khajavi, K., Chung, W., Chen, M. & Vaswani, S. (2026). Preserving Plasticity in Continual Learning with Adaptive Linearity Injection. Proceedings of The 4th Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 330:418-444. Available from https://proceedings.mlr.press/v330/rohani26a.html.