Low-Rank Adaptations for increased Generalization in Foundation Model features

Vilde Schulerud Bøe, Andreas Kleppe, Sebastian Foersch, Daniel-Christoph Wagner, Lill-Tove Rasmussen Busund, Adín Ramírez Rivera
Proceedings of the MICCAI Workshop on Computational Pathology, PMLR 316:234-247, 2026.

Abstract

For foundation models (FMs) to truly advance computational pathology, they must deliver consistent and reliable predictions under diverse, unseen test conditions. Without such robustness, clinical trust and widespread adoption remain out of reach. Although many FMs for histopathology now exist, they have to our knowledge not been systematically tested for robustness by external researchers on independent datasets. In this study, we evaluate the robustness of foundation model features on three separate histopathology datasets and find that their performance drops on external data. Our analysis also reveals that these models often encode dataset-specific information, limiting their generalizability. To address this issue, we train a Weight-Decomposed Low-Rank Adaptation (DoRA) with strong data augmentations to improve feature robustness. Our experiments show that models trained with this adapter exhibit fewer signs of dataset-specific information and may generate more robust features across domains. These results highlight the need for robustness testing and encourage incorporating robustness considerations into the development, training, and tuning of FMs for histopathology. The code for this work will be available at https://github.com/dsb-ifi/DoRA-for-FM-robustness.
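The adapter the abstract refers to, DoRA, reparameterizes a frozen pretrained weight into a magnitude vector and a direction, and applies a LoRA-style low-rank update to the direction only. As a rough illustration (not the authors' implementation; all dimensions and initializations below are illustrative), the merged weight can be sketched in NumPy as:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 16, 4            # layer dims and low rank (illustrative)
W0 = rng.normal(size=(d_out, d_in))  # frozen pretrained weight

# DoRA splits W0 into a per-column magnitude and a direction, then adapts
# the direction with a low-rank product B @ A. Only m, B, A are trained.
m = np.linalg.norm(W0, axis=0, keepdims=True)  # trainable magnitude vector
B = np.zeros((d_out, r))                       # trainable; zero init so the update starts at 0
A = rng.normal(size=(r, d_in)) * 0.01          # trainable

def dora_weight(W0, m, B, A):
    """Merged weight: magnitude times the column-normalized direction W0 + B @ A."""
    V = W0 + B @ A
    return m * (V / np.linalg.norm(V, axis=0, keepdims=True))

W = dora_weight(W0, m, B, A)
# With B = 0 the adapted layer reproduces the pretrained weight exactly.
assert np.allclose(W, W0)
```

Because `B` starts at zero, fine-tuning begins from the unmodified foundation model and the adapter only gradually reshapes the feature space, which is what makes this a light-touch way to remove dataset-specific information without retraining the backbone.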

Cite this Paper


BibTeX
@InProceedings{pmlr-v316-boe26a,
  title     = {Low-Rank Adaptations for increased Generalization in Foundation Model features},
  author    = {B{\o}e, Vilde Schulerud and Kleppe, Andreas and Foersch, Sebastian and Wagner, Daniel-Christoph and Busund, Lill-Tove Rasmussen and Rivera, Ad\'{i}n Ram\'{i}rez},
  booktitle = {Proceedings of the MICCAI Workshop on Computational Pathology},
  pages     = {234--247},
  year      = {2026},
  editor    = {Studer, Linda and Ciompi, Francesco and Khalili, Nadieh and Faryna, Khrystyna and Yeong, Joe and Lau, Mai Chan and Chen, Hao and Liu, Ziyi and Brattoli, Biagio},
  volume    = {316},
  series    = {Proceedings of Machine Learning Research},
  month     = {27 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v316/main/assets/boe26a/boe26a.pdf},
  url       = {https://proceedings.mlr.press/v316/boe26a.html},
  abstract  = {For foundation models (FMs) to truly advance computational pathology, they must deliver consistent and reliable predictions under diverse, unseen test conditions. Without such robustness, clinical trust and widespread adoption remain out of reach. Although many FMs for histopathology now exist, they have to our knowledge not been systematically tested for robustness by external researchers on independent datasets. In this study, we evaluate the robustness of foundation model features on three separate histopathology datasets and find that their performance drops on external data. Our analysis also reveals that these models often encode dataset-specific information, limiting their generalizability. To address this issue, we train a Weight-Decomposed Low-Rank Adaptation (DoRA) with strong data augmentations to improve feature robustness. Our experiments show that models trained with this adapter exhibit fewer signs of dataset-specific information and may generate more robust features across domains. These results highlight the need for robustness testing and encourage incorporating robustness considerations into the development, training, and tuning of FMs for histopathology. The code for this work will be available at https://github.com/dsb-ifi/DoRA-for-FM-robustness.}
}
Endnote
%0 Conference Paper
%T Low-Rank Adaptations for increased Generalization in Foundation Model features
%A Vilde Schulerud Bøe
%A Andreas Kleppe
%A Sebastian Foersch
%A Daniel-Christoph Wagner
%A Lill-Tove Rasmussen Busund
%A Adín Ramírez Rivera
%B Proceedings of the MICCAI Workshop on Computational Pathology
%C Proceedings of Machine Learning Research
%D 2026
%E Linda Studer
%E Francesco Ciompi
%E Nadieh Khalili
%E Khrystyna Faryna
%E Joe Yeong
%E Mai Chan Lau
%E Hao Chen
%E Ziyi Liu
%E Biagio Brattoli
%F pmlr-v316-boe26a
%I PMLR
%P 234--247
%U https://proceedings.mlr.press/v316/boe26a.html
%V 316
%X For foundation models (FMs) to truly advance computational pathology, they must deliver consistent and reliable predictions under diverse, unseen test conditions. Without such robustness, clinical trust and widespread adoption remain out of reach. Although many FMs for histopathology now exist, they have to our knowledge not been systematically tested for robustness by external researchers on independent datasets. In this study, we evaluate the robustness of foundation model features on three separate histopathology datasets and find that their performance drops on external data. Our analysis also reveals that these models often encode dataset-specific information, limiting their generalizability. To address this issue, we train a Weight-Decomposed Low-Rank Adaptation (DoRA) with strong data augmentations to improve feature robustness. Our experiments show that models trained with this adapter exhibit fewer signs of dataset-specific information and may generate more robust features across domains. These results highlight the need for robustness testing and encourage incorporating robustness considerations into the development, training, and tuning of FMs for histopathology. The code for this work will be available at https://github.com/dsb-ifi/DoRA-for-FM-robustness.
APA
Bøe, V.S., Kleppe, A., Foersch, S., Wagner, D.-C., Busund, L.-T.R. & Ramírez Rivera, A. (2026). Low-Rank Adaptations for increased Generalization in Foundation Model features. Proceedings of the MICCAI Workshop on Computational Pathology, in Proceedings of Machine Learning Research 316:234-247. Available from https://proceedings.mlr.press/v316/boe26a.html.