Preserving Plasticity in Continual Learning with Adaptive Linearity Injection
Proceedings of The 4th Conference on Lifelong Learning Agents, PMLR 330:418-444, 2026.
Abstract
Loss of plasticity in deep neural networks is the gradual reduction in a model’s capacity to learn incrementally, and it has been identified as a key obstacle to learning in non-stationary problem settings. Recent work has shown that deep linear networks tend to be resilient to loss of plasticity. Motivated by this observation, we propose $\textbf{Ada}$ptive $\textbf{Lin}$earization ($\texttt{AdaLin}$), a general approach that dynamically adapts each neuron’s activation function to mitigate plasticity loss. Unlike prior methods that rely on regularization or periodic resets, $\texttt{AdaLin}$ equips every neuron with a learnable parameter and a gating mechanism that injects linearity into the activation function based on its gradient flow. This adaptive modulation ensures sufficient gradient signal and sustains continual learning without introducing additional hyperparameters or requiring explicit task boundaries. When used with conventional activation functions such as ReLU and Tanh, we demonstrate that $\texttt{AdaLin}$ can significantly improve performance on standard benchmarks, including Random Label and Permuted MNIST, Random Label and Shuffled CIFAR-10, and Class-Split CIFAR-100. Our findings show that a per-neuron PReLU, as recovered by $\texttt{AdaLin}$ with ReLU, is surprisingly effective in mitigating plasticity loss. We also perform a systematic set of ablations showing that neuron-level adaptation is crucial for good performance, and analyze several network metrics that may be correlated with loss of plasticity. Our code is publicly available at: https://github.com/RoozbehRazavi/AdaLin.git
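The core mechanism described above, blending a base activation with the identity via a per-neuron gate, can be sketched as follows. This is a simplified illustration, not the authors' implementation: the function names and the sigmoid gate on a learnable parameter `alpha` are assumptions, and in $\texttt{AdaLin}$ the injected linearity is modulated by each neuron's gradient flow rather than by a free-standing gate.

```python
import numpy as np

def adalin_activation(x, alpha, base_act=np.tanh):
    """Per-neuron linearity injection (illustrative sketch).

    x:     pre-activations, shape (batch, n_neurons)
    alpha: learnable per-neuron parameter, shape (n_neurons,)

    The gate g = sigmoid(alpha) here is a simplification; the paper
    ties the amount of injected linearity to each neuron's gradient
    signal so that gradients keep flowing under non-stationarity.
    """
    g = 1.0 / (1.0 + np.exp(-alpha))           # per-neuron gate in (0, 1)
    return (1.0 - g) * base_act(x) + g * x     # blend activation with identity

# Example: one neuron fully gated toward linearity, one toward tanh.
x = np.array([[-2.0, 0.5]])
alpha = np.array([10.0, -10.0])
y = adalin_activation(x, alpha)
```

Note that with ReLU as the base activation, blending in the identity on the negative side yields a per-neuron PReLU-style unit, consistent with the abstract's observation that $\texttt{AdaLin}$ with ReLU recovers a per-neuron PReLU.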