Pruned Adaptation Modules: A Simple yet Strong Baseline for Continual Foundation Models

Elif Ceren Gok Yildirim, Murat Onur Yildirim, Joaquin Vanschoren
Conference on Parsimony and Learning, PMLR 328:501-515, 2026.

Abstract

The continual learning literature has rapidly shifted from traditional class-incremental learning (CIL) techniques to foundation model (FM)-based CIL methods without a clear understanding of how these newer approaches compare to strong, lightweight convolutional baselines. This abrupt transition has created a substantial methodological gap, making it difficult to assess whether recent FM-based CIL progress reflects genuine advances or merely the absence of rigorous baselines. To address this gap, we introduce Pruned Adaptation Modules (PAM), a simple yet effective method that freezes the vast majority of the pre-trained ResNet while enabling scalable continual adaptation through sparse task-specific layers. PAM yields up to a 5$\times$ reduction in trainable parameters and a 6$\times$ reduction in total parameters, significantly reducing the cost of continual updates. Across diverse benchmarks, PAM consistently mitigates catastrophic forgetting and outperforms state-of-the-art FM-based CIL approaches. Our findings position PAM as a strong and transparent baseline that helps bridge the gap between traditional and FM-based CIL, guiding future research for a more accurate assessment of true progress in continual adaptation.
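As a concrete, purely illustrative picture of the recipe the abstract describes, the sketch below freezes a pre-trained torchvision ResNet-18 and trains only a small, magnitude-pruned task-specific head. The TaskAdapter module, its hidden width, the 80% sparsity level, and the use of torch.nn.utils.prune are assumptions made for this sketch; they are not the paper's actual PAM modules or parameter counts.

# Illustrative sketch only: frozen pre-trained backbone + sparse per-task head.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()   # keep the 512-d penultimate features
backbone.eval()               # keep batch-norm statistics fixed
for p in backbone.parameters():
    p.requires_grad = False   # freeze the entire pre-trained backbone

class TaskAdapter(nn.Module):
    # Hypothetical task-specific module: one pruned hidden layer plus a classifier.
    def __init__(self, in_dim=512, hidden=256, num_classes=10, sparsity=0.8):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden)
        self.classifier = nn.Linear(hidden, num_classes)
        # Magnitude pruning zeroes the smallest 80% of hidden weights, so only a
        # sparse subset of this layer effectively remains trainable.
        prune.l1_unstructured(self.hidden, name="weight", amount=sparsity)

    def forward(self, x):
        return self.classifier(torch.relu(self.hidden(x)))

# One lightweight adapter per task; the frozen backbone is shared by all tasks.
adapter_task0 = TaskAdapter(num_classes=10)
with torch.no_grad():
    features = backbone(torch.randn(4, 3, 224, 224))
logits = adapter_task0(features)   # only adapter parameters receive gradients

Under a setup like this, each new task adds only its own sparse adapter while the shared backbone stays untouched, which is how a frozen backbone plus sparse per-task layers can keep both trainable and total parameter counts low.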

Cite this Paper


BibTeX
@InProceedings{pmlr-v328-yildirim26a,
  title     = {Pruned Adaptation Modules: A Simple yet Strong Baseline for Continual Foundation Models},
  author    = {Yildirim, Elif Ceren Gok and Yildirim, Murat Onur and Vanschoren, Joaquin},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {501--515},
  year      = {2026},
  editor    = {Burkholz, Rebekka and Liu, Shiwei and Ravishankar, Saiprasad and Redman, William and Huang, Wei and Su, Weijie and Zhu, Zhihui},
  volume    = {328},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Mar},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v328/main/assets/yildirim26a/yildirim26a.pdf},
  url       = {https://proceedings.mlr.press/v328/yildirim26a.html},
  abstract  = {The continual learning literature has rapidly shifted from traditional class-incremental learning (CIL) techniques to foundation model (FM)-based CIL methods without a clear understanding of how these newer approaches compare to strong, lightweight convolutional baselines. This abrupt transition has created a substantial methodological gap, making it difficult to assess whether recent FM-based CIL progress reflects genuine advances or merely the absence of rigorous baselines. To address this gap, we introduce Pruned Adaptation Modules (PAM), a simple yet effective method that freezes the vast majority of the pre-trained ResNet while enabling scalable continual adaptation through sparse task-specific layers. PAM yields up to a 5$\times$ reduction in trainable parameters and a 6$\times$ reduction in total parameters, significantly reducing the cost of continual updates. Across diverse benchmarks, PAM consistently mitigates catastrophic forgetting and outperforms state-of-the-art FM-based CIL approaches. Our findings position PAM as a strong and transparent baseline that helps bridge the gap between traditional and FM-based CIL, guiding future research for a more accurate assessment of true progress in continual adaptation.}
}
Endnote
%0 Conference Paper
%T Pruned Adaptation Modules: A Simple yet Strong Baseline for Continual Foundation Models
%A Elif Ceren Gok Yildirim
%A Murat Onur Yildirim
%A Joaquin Vanschoren
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Rebekka Burkholz
%E Shiwei Liu
%E Saiprasad Ravishankar
%E William Redman
%E Wei Huang
%E Weijie Su
%E Zhihui Zhu
%F pmlr-v328-yildirim26a
%I PMLR
%P 501--515
%U https://proceedings.mlr.press/v328/yildirim26a.html
%V 328
%X The continual learning literature has rapidly shifted from traditional class-incremental learning (CIL) techniques to foundation model (FM)-based CIL methods without a clear understanding of how these newer approaches compare to strong, lightweight convolutional baselines. This abrupt transition has created a substantial methodological gap, making it difficult to assess whether recent FM-based CIL progress reflects genuine advances or merely the absence of rigorous baselines. To address this gap, we introduce Pruned Adaptation Modules (PAM), a simple yet effective method that freezes the vast majority of the pre-trained ResNet while enabling scalable continual adaptation through sparse task-specific layers. PAM yields up to a 5$\times$ reduction in trainable parameters and a 6$\times$ reduction in total parameters, significantly reducing the cost of continual updates. Across diverse benchmarks, PAM consistently mitigates catastrophic forgetting and outperforms state-of-the-art FM-based CIL approaches. Our findings position PAM as a strong and transparent baseline that helps bridge the gap between traditional and FM-based CIL, guiding future research for a more accurate assessment of true progress in continual adaptation.
APA
Yildirim, E.C.G., Yildirim, M.O. & Vanschoren, J. (2026). Pruned Adaptation Modules: A Simple yet Strong Baseline for Continual Foundation Models. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 328:501-515. Available from https://proceedings.mlr.press/v328/yildirim26a.html.
