Overcoming catastrophic forgetting with classifier expander

Xinchen Liu, Hongbo Wang, Yingjian Tian, Linyao Xie
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:803-817, 2024.

Abstract

Models must adapt gradually to an increasingly complex world, and they are most useful when they keep pace with it. Continual learning (CL), however, suffers from catastrophic forgetting: an effective CL method must both limit forgetting and learn new tasks well. In this study, we propose the Classifier Expander (CE) method, which combines regularization-based and replay-based approaches and meets both requirements through two stages of training. In the first stage, training for the new task is confined to the part of the network relevant to that task, and replay is used to mitigate forgetting; this minimizes disruption to old tasks while allowing the new one to be learned efficiently. In the second stage, the network is retrained on all available data, fully training the classifier to balance performance on old and new tasks. Our method consistently outperforms previous CL methods on the CIFAR-100 and CUB-200 datasets, achieving an average improvement of 2.94% in class-incremental learning and 1.16% in task-incremental learning over the best existing method. Our code is available at https://github.com/EmbraceTomorrow/CE.
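The two-stage recipe in the abstract is concrete enough to sketch. Below is a minimal, hypothetical PyTorch rendering of it, not the authors' implementation (that is in the linked repository): `ExpandableNet`, `stage1`, `stage2`, and every hyperparameter here are illustrative assumptions.

```python
# Hypothetical sketch of the two-stage CE training loop described in the
# abstract. Names and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, ConcatDataset


class ExpandableNet(nn.Module):
    """Shared backbone with one classifier head per task ("classifier
    expansion"): a new head is appended when a new task arrives."""

    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone
        self.feat_dim = feat_dim
        self.heads = nn.ModuleList()  # one Linear head per task

    def add_head(self, num_classes: int):
        self.heads.append(nn.Linear(self.feat_dim, num_classes))

    def forward(self, x):
        feats = self.backbone(x)
        # Concatenate per-task logits into one class-incremental output;
        # labels are assumed to be global class indices.
        return torch.cat([h(feats) for h in self.heads], dim=1)


def stage1(model, new_loader, replay_loader, epochs=10, lr=0.01):
    """Stage 1: learn the new task with old-task heads frozen, while a
    replay loss on buffered exemplars limits drift in the shared backbone."""
    for head in model.heads[:-1]:  # leave old-task heads untouched
        for p in head.parameters():
            p.requires_grad_(False)
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    for _ in range(epochs):
        for (x_new, y_new), (x_old, y_old) in zip(new_loader, replay_loader):
            opt.zero_grad()
            loss = (F.cross_entropy(model(x_new), y_new)
                    + F.cross_entropy(model(x_old), y_old))
            loss.backward()
            opt.step()


def stage2(model, new_set, memory_set, epochs=5, lr=0.001, batch_size=64):
    """Stage 2: unfreeze everything and retrain on all available data
    (new data plus the replay buffer) to rebalance the full classifier."""
    for p in model.parameters():
        p.requires_grad_(True)
    loader = DataLoader(ConcatDataset([new_set, memory_set]),
                        batch_size=batch_size, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
```

In this reading, stage 1 lets only the new head and the shared backbone receive gradients, so old heads cannot be overwritten; stage 2 then briefly fine-tunes all heads jointly so that old-task and new-task logits remain comparable in scale.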

Cite this Paper


BibTeX
@InProceedings{pmlr-v222-liu24b, title = {Overcoming catastrophic forgetting with classifier expander}, author = {Liu, Xinchen and Wang, Hongbo and Tian, Yingjian and Xie, Linyao}, booktitle = {Proceedings of the 15th Asian Conference on Machine Learning}, pages = {803--817}, year = {2024}, editor = {Yanıkoğlu, Berrin and Buntine, Wray}, volume = {222}, series = {Proceedings of Machine Learning Research}, month = {11--14 Nov}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v222/liu24b/liu24b.pdf}, url = {https://proceedings.mlr.press/v222/liu24b.html}, abstract = {Models must adapt gradually to an increasingly complex world, and they are most useful when they keep pace with it. Continual learning (CL), however, suffers from catastrophic forgetting: an effective CL method must both limit forgetting and learn new tasks well. In this study, we propose the Classifier Expander (CE) method, which combines regularization-based and replay-based approaches and meets both requirements through two stages of training. In the first stage, training for the new task is confined to the part of the network relevant to that task, and replay is used to mitigate forgetting; this minimizes disruption to old tasks while allowing the new one to be learned efficiently. In the second stage, the network is retrained on all available data, fully training the classifier to balance performance on old and new tasks. Our method consistently outperforms previous CL methods on the CIFAR-100 and CUB-200 datasets, achieving an average improvement of 2.94% in class-incremental learning and 1.16% in task-incremental learning over the best existing method. Our code is available at https://github.com/EmbraceTomorrow/CE.} }
Endnote
%0 Conference Paper %T Overcoming catastrophic forgetting with classifier expander %A Xinchen Liu %A Hongbo Wang %A Yingjian Tian %A Linyao Xie %B Proceedings of the 15th Asian Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2024 %E Berrin Yanıkoğlu %E Wray Buntine %F pmlr-v222-liu24b %I PMLR %P 803--817 %U https://proceedings.mlr.press/v222/liu24b.html %V 222 %X Models must adapt gradually to an increasingly complex world, and they are most useful when they keep pace with it. Continual learning (CL), however, suffers from catastrophic forgetting: an effective CL method must both limit forgetting and learn new tasks well. In this study, we propose the Classifier Expander (CE) method, which combines regularization-based and replay-based approaches and meets both requirements through two stages of training. In the first stage, training for the new task is confined to the part of the network relevant to that task, and replay is used to mitigate forgetting; this minimizes disruption to old tasks while allowing the new one to be learned efficiently. In the second stage, the network is retrained on all available data, fully training the classifier to balance performance on old and new tasks. Our method consistently outperforms previous CL methods on the CIFAR-100 and CUB-200 datasets, achieving an average improvement of 2.94% in class-incremental learning and 1.16% in task-incremental learning over the best existing method. Our code is available at https://github.com/EmbraceTomorrow/CE.
APA
Liu, X., Wang, H., Tian, Y. & Xie, L. (2024). Overcoming catastrophic forgetting with classifier expander. Proceedings of the 15th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 222:803-817. Available from https://proceedings.mlr.press/v222/liu24b.html.