Overcoming catastrophic forgetting with classifier expander
Proceedings of the 15th Asian Conference on Machine Learning, PMLR 222:803-817, 2024.
Abstract
It is essential for models to gradually adapt to the world's increasing complexity, and models are more useful when they keep up with the times. However, continual learning (CL) suffers from catastrophic forgetting, and an effective CL method must both limit forgetting and learn new tasks well. In this study, we propose the Classifier Expander (CE) method, which combines regularization-based and replay-based approaches. It meets both requirements through two stages of training. In the first stage, training on the new task is confined to the portion of the network relevant to that task, and a replay approach is used to mitigate forgetting. This strategy minimizes disruption to the old tasks while enabling efficient learning of the new one. In the second stage, the network is retrained on all available data, and the classifier is trained sufficiently to balance performance on the old and new tasks. Our method consistently outperforms previous CL methods on the CIFAR-100 and CUB-200 datasets, achieving an average improvement of 2.94% in class-incremental learning and 1.16% in task-incremental learning over the best existing method. Our code is available at https://github.com/EmbraceTomorrow/CE.
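Below is a minimal sketch, written as an assumption about how the two-stage procedure described in the abstract could be organized; it is not the authors' implementation (see the linked repository for that), and the names `Net`, `ReplayBuffer`, `stage1`, `stage2`, and `new_class_ids` are all hypothetical.

```python
# Hypothetical sketch of a two-stage continual-learning update as described in the
# abstract; not the authors' code (see https://github.com/EmbraceTomorrow/CE).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    """Shared feature extractor with a single classifier head over all classes seen so far."""
    def __init__(self, in_dim=32, hidden=64, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))


class ReplayBuffer:
    """Illustrative exemplar store for samples from earlier tasks."""
    def __init__(self):
        self.x, self.y = [], []

    def __len__(self):
        return len(self.x)

    def add(self, x, y):
        self.x.append(x); self.y.append(y)

    def sample(self, n):
        idx = torch.randint(0, len(self.x), (n,))
        xs = torch.stack([self.x[i] for i in idx])
        ys = torch.stack([self.y[i] for i in idx])
        return xs, ys


def stage1(model, new_loader, buffer, new_class_ids, epochs=1, lr=1e-3):
    # Stage 1 (assumed): learn the new task while replaying stored exemplars.
    # The new-task loss is restricted to the logits of the new classes, so updates
    # focus on the part of the classifier relevant to the new task.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in new_loader:
            loss = F.cross_entropy(model(x)[:, new_class_ids], y - new_class_ids[0])
            if len(buffer) > 0:
                xr, yr = buffer.sample(x.size(0))
                loss = loss + F.cross_entropy(model(xr), yr)
            opt.zero_grad(); loss.backward(); opt.step()


def stage2(model, all_loader, epochs=1, lr=1e-3):
    # Stage 2 (assumed): retrain the classifier on all available data
    # to rebalance performance on old and new classes.
    opt = torch.optim.SGD(model.classifier.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in all_loader:
            loss = F.cross_entropy(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
```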