Improving Continual Learning Performance and Efficiency with Auxiliary Classifiers

Filip Szatkowski, Yaoyue Zheng, Fei Yang, Tomasz Trzcinski, Bartłomiej Twardowski, Joost Van De Weijer
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:58106-58141, 2025.

Abstract

Continual learning is crucial for applying machine learning in challenging, dynamic, and often resource-constrained environments. However, catastrophic forgetting (overwriting previously learned knowledge when new information is acquired) remains a major challenge. In this work, we examine the intermediate representations in neural network layers during continual learning and find that such representations are less prone to forgetting, highlighting their potential to accelerate computation. Motivated by these findings, we propose to use auxiliary classifiers (ACs) to enhance performance and demonstrate that integrating ACs into various continual learning methods consistently improves accuracy across diverse evaluation settings, yielding an average 10% relative gain. We also leverage the ACs to reduce the average inference cost by 10-60% without compromising accuracy, enabling the model to return predictions before computing all the layers. Our approach provides a scalable and efficient solution for continual learning.
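To make the mechanism concrete, below is a minimal, hypothetical sketch of auxiliary classifiers attached to intermediate stages of a backbone, with a confidence-threshold early exit at inference time. The class names, the pooled-linear head design, and the 0.9 threshold are illustrative assumptions for a PyTorch-style model, not the authors' released implementation.

```python
# Illustrative sketch only: auxiliary classifiers (ACs) on intermediate stages,
# with confidence-based early exit. Names and threshold are assumptions.
import torch
import torch.nn as nn


class BackboneWithACs(nn.Module):
    """A backbone split into stages, with one auxiliary classifier per stage."""

    def __init__(self, stages, feature_dims, num_classes):
        super().__init__()
        self.stages = nn.ModuleList(stages)
        # Lightweight head after each stage: global pool -> flatten -> linear.
        self.acs = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(dim, num_classes))
            for dim in feature_dims
        )

    def forward(self, x):
        # Training: return logits from every AC so each head can be supervised.
        all_logits = []
        for stage, ac in zip(self.stages, self.acs):
            x = stage(x)
            all_logits.append(ac(x))
        return all_logits

    @torch.no_grad()
    def predict_early_exit(self, x, threshold=0.9):
        # Inference (batch size 1): return the first AC prediction whose softmax
        # confidence exceeds the threshold, skipping the remaining stages.
        pred = None
        for stage, ac in zip(self.stages, self.acs):
            x = stage(x)
            probs = ac(x).softmax(dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= threshold:
                break
        return pred


# Toy usage: a three-stage convolutional backbone on 32x32 inputs.
stages = [
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
]
model = BackboneWithACs(stages, feature_dims=[16, 32, 64], num_classes=10)
logits_per_ac = model(torch.randn(8, 3, 32, 32))               # list of [8, 10] logits
prediction = model.predict_early_exit(torch.randn(1, 3, 32, 32))
```

During training, each AC's logits would typically be supervised alongside the final head (for example, by summing per-AC cross-entropy losses), which keeps the intermediate representations directly usable for early predictions at inference time.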

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-szatkowski25a,
  title     = {Improving Continual Learning Performance and Efficiency with Auxiliary Classifiers},
  author    = {Szatkowski, Filip and Zheng, Yaoyue and Yang, Fei and Trzcinski, Tomasz and Twardowski, Bart{\l}omiej and Van De Weijer, Joost},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {58106--58141},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/szatkowski25a/szatkowski25a.pdf},
  url       = {https://proceedings.mlr.press/v267/szatkowski25a.html},
  abstract  = {Continual learning is crucial for applying machine learning in challenging, dynamic, and often resource-constrained environments. However, catastrophic forgetting — overwriting previously learned knowledge when new information is acquired — remains a major challenge. In this work, we examine the intermediate representations in neural network layers during continual learning and find that such representations are less prone to forgetting, highlighting their potential to accelerate computation. Motivated by these findings, we propose to use auxiliary classifiers (ACs) to enhance performance and demonstrate that integrating ACs into various continual learning methods consistently improves accuracy across diverse evaluation settings, yielding an average 10% relative gain. We also leverage the ACs to reduce the average cost of the inference by 10-60% without compromising accuracy, enabling the model to return the predictions before computing all the layers. Our approach provides a scalable and efficient solution for continual learning.}
}
Endnote
%0 Conference Paper
%T Improving Continual Learning Performance and Efficiency with Auxiliary Classifiers
%A Filip Szatkowski
%A Yaoyue Zheng
%A Fei Yang
%A Tomasz Trzcinski
%A Bartłomiej Twardowski
%A Joost Van De Weijer
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-szatkowski25a
%I PMLR
%P 58106--58141
%U https://proceedings.mlr.press/v267/szatkowski25a.html
%V 267
%X Continual learning is crucial for applying machine learning in challenging, dynamic, and often resource-constrained environments. However, catastrophic forgetting — overwriting previously learned knowledge when new information is acquired — remains a major challenge. In this work, we examine the intermediate representations in neural network layers during continual learning and find that such representations are less prone to forgetting, highlighting their potential to accelerate computation. Motivated by these findings, we propose to use auxiliary classifiers (ACs) to enhance performance and demonstrate that integrating ACs into various continual learning methods consistently improves accuracy across diverse evaluation settings, yielding an average 10% relative gain. We also leverage the ACs to reduce the average cost of the inference by 10-60% without compromising accuracy, enabling the model to return the predictions before computing all the layers. Our approach provides a scalable and efficient solution for continual learning.
APA
Szatkowski, F., Zheng, Y., Yang, F., Trzcinski, T., Twardowski, B. & Van De Weijer, J. (2025). Improving Continual Learning Performance and Efficiency with Auxiliary Classifiers. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:58106-58141. Available from https://proceedings.mlr.press/v267/szatkowski25a.html.
