Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning

Depeng Li, Tianqi Wang, Junwei Chen, Wei Dai, Zhigang Zeng
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:28688-28705, 2024.

Abstract

Class-incremental learning (CIL) aims to train a model to learn new classes from non-stationary data streams without forgetting old ones. In this paper, we propose a new kind of connectionist model by tailoring neural unit dynamics that adapt the behavior of neural networks for CIL. In each training session, it introduces a supervisory mechanism to guide network expansion whose growth size is compactly commensurate with the intrinsic complexity of a newly arriving task. This constructs a near-minimal network while allowing the model to expand its capacity when it cannot sufficiently hold new classes. At inference time, it automatically reactivates the required neural units to retrieve knowledge and leaves the remaining ones inactivated to prevent interference. We name our model AutoActivator, which is effective and scalable. To gain insights into the neural unit dynamics, we theoretically analyze the model’s convergence property via a universal approximation theorem on learning sequential mappings, which is under-explored in the CIL community. Experiments show that our method achieves strong CIL performance in rehearsal-free and minimal-expansion settings with different backbones.
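The two mechanisms the abstract describes, growing a compact per-task module under a supervisory check and reactivating only the relevant units at inference, can be illustrated with a small sketch. The code below is a conceptual NumPy approximation, not the authors' implementation: the class names (TaskModule, AutoActivatorSketch), the ridge-regression readout, the capacity-doubling loop, the confidence-based module selection, and the hidden/error_threshold parameters are all assumptions made for illustration.

import numpy as np


class TaskModule:
    # One expansion block: a fixed random-feature layer plus a linear readout
    # fitted on the new task's data.
    def __init__(self, in_dim, hidden, classes, rng):
        self.w = rng.standard_normal((in_dim, hidden)) * 0.1
        self.readout = None
        self.classes = list(classes)   # class ids this module is responsible for

    def features(self, x):
        return np.tanh(x @ self.w)

    def fit(self, x, y_onehot):
        h = self.features(x)
        # Ridge-regression readout as a stand-in for the paper's training rule.
        self.readout = np.linalg.solve(
            h.T @ h + 1e-3 * np.eye(h.shape[1]), h.T @ y_onehot
        )

    def scores(self, x):
        return self.features(x) @ self.readout


class AutoActivatorSketch:
    # Grows one compact module per task; at test time only the best-matching
    # module is "reactivated" and the others stay silent.
    def __init__(self, in_dim, rng=None):
        self.in_dim = in_dim
        self.modules = []
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def learn_task(self, x, labels, classes, hidden=64, error_threshold=0.05):
        classes = list(classes)
        y = np.eye(len(classes))[[classes.index(c) for c in labels]]
        size = hidden
        while True:
            # Supervisory check: expand capacity only while the new classes
            # are not held well enough, so growth stays near-minimal.
            module = TaskModule(self.in_dim, size, classes, self.rng)
            module.fit(x, y)
            train_err = np.mean(module.scores(x).argmax(1) != y.argmax(1))
            if train_err <= error_threshold or size >= 8 * hidden:
                break
            size *= 2
        self.modules.append(module)

    def predict(self, x_single):
        # Reactivate only the module with the most confident response for this
        # sample; the units of all other modules remain inactive.
        best_class, best_conf = None, -np.inf
        for module in self.modules:
            s = module.scores(x_single)
            if s.max() > best_conf:
                best_conf = s.max()
                best_class = module.classes[int(s.argmax())]
        return best_class

The trial-doubling loop is only a stand-in for the paper's supervisory expansion signal, which ties the number of added neural units to the intrinsic complexity of the incoming task, and the confidence-based module selection is a simplified proxy for the paper's automatic unit reactivation at inference.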

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-li24bk,
  title     = {Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning},
  author    = {Li, Depeng and Wang, Tianqi and Chen, Junwei and Dai, Wei and Zeng, Zhigang},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {28688--28705},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/li24bk/li24bk.pdf},
  url       = {https://proceedings.mlr.press/v235/li24bk.html},
  abstract  = {Class-incremental learning (CIL) aims to train a model to learn new classes from non-stationary data streams without forgetting old ones. In this paper, we propose a new kind of connectionist model by tailoring neural unit dynamics that adapt the behavior of neural networks for CIL. In each training session, it introduces a supervisory mechanism to guide network expansion whose growth size is compactly commensurate with the intrinsic complexity of a newly arriving task. This constructs a near-minimal network while allowing the model to expand its capacity when it cannot sufficiently hold new classes. At inference time, it automatically reactivates the required neural units to retrieve knowledge and leaves the remaining ones inactivated to prevent interference. We name our model AutoActivator, which is effective and scalable. To gain insights into the neural unit dynamics, we theoretically analyze the model’s convergence property via a universal approximation theorem on learning sequential mappings, which is under-explored in the CIL community. Experiments show that our method achieves strong CIL performance in rehearsal-free and minimal-expansion settings with different backbones.}
}
Endnote
%0 Conference Paper
%T Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning
%A Depeng Li
%A Tianqi Wang
%A Junwei Chen
%A Wei Dai
%A Zhigang Zeng
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-li24bk
%I PMLR
%P 28688--28705
%U https://proceedings.mlr.press/v235/li24bk.html
%V 235
%X Class-incremental learning (CIL) aims to train a model to learn new classes from non-stationary data streams without forgetting old ones. In this paper, we propose a new kind of connectionist model by tailoring neural unit dynamics that adapt the behavior of neural networks for CIL. In each training session, it introduces a supervisory mechanism to guide network expansion whose growth size is compactly commensurate with the intrinsic complexity of a newly arriving task. This constructs a near-minimal network while allowing the model to expand its capacity when it cannot sufficiently hold new classes. At inference time, it automatically reactivates the required neural units to retrieve knowledge and leaves the remaining ones inactivated to prevent interference. We name our model AutoActivator, which is effective and scalable. To gain insights into the neural unit dynamics, we theoretically analyze the model’s convergence property via a universal approximation theorem on learning sequential mappings, which is under-explored in the CIL community. Experiments show that our method achieves strong CIL performance in rehearsal-free and minimal-expansion settings with different backbones.
APA
Li, D., Wang, T., Chen, J., Dai, W. & Zeng, Z. (2024). Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:28688-28705. Available from https://proceedings.mlr.press/v235/li24bk.html.
