WSI-SAM: Multi-resolution Segment Anything Model (SAM) for histopathology whole-slide images

Hong Liu, Haosen Yang, Paul J. van Diest, Josien P.W. Pluim, Mitko Veta
Proceedings of the MICCAI Workshop on Computational Pathology, PMLR 254:25-37, 2024.

Abstract

The Segment Anything Model (SAM) marks a significant advance in segmentation models, offering robust zero-shot abilities and dynamic prompting. However, existing medical SAMs are not suited to the multi-scale nature of whole-slide images (WSIs), which restricts their effectiveness. To address this drawback, we present WSI-SAM, which enhances SAM with precise object segmentation for histopathology images using multi-resolution patches while preserving its efficient, prompt-driven design and zero-shot abilities. To fully exploit pretrained knowledge while minimizing training overhead, we keep SAM frozen and introduce only minimal extra parameters and computation. In particular, we introduce a High-Resolution (HR) token, a Low-Resolution (LR) token, and a dual mask decoder. This decoder combines the original SAM mask decoder with a lightweight fusion module that merges features at multiple scales. Instead of predicting each mask independently, we fuse the HR and LR tokens at an intermediate layer to jointly learn features of the same object across resolutions. Experiments show that WSI-SAM outperforms state-of-the-art SAM and its variants. In particular, it outperforms SAM by 4.1 and 2.5 percentage points on a ductal carcinoma in situ (DCIS) segmentation task and a breast cancer metastasis segmentation task (CAMELYON16 dataset), respectively. The code will be available at https://github.com/HongLiuuuuu/WSI-SAM.
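
The abstract sketches the architecture at a high level: a frozen SAM backbone, learnable HR and LR tokens, and a lightweight module that fuses the two tokens at an intermediate layer of a dual mask decoder. Below is a minimal PyTorch sketch of that token-fusion idea, assuming patch features from the frozen SAM image encoder at two magnifications; all module and parameter names (DualResolutionFusion, attn_hr, fuse, ...) are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the dual-resolution token
# fusion described in the abstract. A frozen SAM encoder is assumed to
# provide patch features at two magnifications; learnable HR/LR tokens
# attend to their respective streams and are fused into one mask token.
import torch
import torch.nn as nn

class DualResolutionFusion(nn.Module):  # hypothetical name
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.hr_token = nn.Parameter(torch.randn(1, 1, dim))  # learnable HR token
        self.lr_token = nn.Parameter(torch.randn(1, 1, dim))  # learnable LR token
        # Each token cross-attends to the image features of its resolution.
        self.attn_hr = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_lr = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Lightweight fusion: concatenate the two tokens, project back to dim.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, feats_hr: torch.Tensor, feats_lr: torch.Tensor) -> torch.Tensor:
        # feats_hr, feats_lr: (B, N, dim) patch features from the frozen
        # SAM image encoder at high and low magnification, respectively.
        b = feats_hr.size(0)
        hr, _ = self.attn_hr(self.hr_token.expand(b, -1, -1), feats_hr, feats_hr)
        lr, _ = self.attn_lr(self.lr_token.expand(b, -1, -1), feats_lr, feats_lr)
        # Joint mask token carrying information from both resolutions.
        return self.fuse(torch.cat([hr, lr], dim=-1))  # (B, 1, dim)

# Usage: only the tokens and fusion module are trained; SAM stays frozen.
fusion = DualResolutionFusion()
feats_hr = torch.randn(2, 4096, 256)  # e.g. a 64x64 feature grid per patch
feats_lr = torch.randn(2, 4096, 256)
print(fusion(feats_hr, feats_lr).shape)  # torch.Size([2, 1, 256])
```

In this reading, only the tokens and the fusion module carry gradients, consistent with the paper's stated goal of adding minimal parameters and computation on top of a frozen SAM.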

Cite this Paper


BibTeX
@InProceedings{pmlr-v254-liu24a,
  title = {WSI-SAM: Multi-resolution Segment Anything Model (SAM) for histopathology whole-slide images},
  author = {Liu, Hong and Yang, Haosen and Diest, Paul J. van and Pluim, Josien P.W. and Veta, Mitko},
  booktitle = {Proceedings of the MICCAI Workshop on Computational Pathology},
  pages = {25--37},
  year = {2024},
  editor = {Ciompi, Francesco and Khalili, Nadieh and Studer, Linda and Poceviciute, Milda and Khan, Amjad and Veta, Mitko and Jiao, Yiping and Haj-Hosseini, Neda and Chen, Hao and Raza, Shan and Minhas, Fayyaz and Zlobec, Inti and Burlutskiy, Nikolay and Vilaplana, Veronica and Brattoli, Biagio and Muller, Henning and Atzori, Manfredo},
  volume = {254},
  series = {Proceedings of Machine Learning Research},
  month = {06 Oct},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v254/main/assets/liu24a/liu24a.pdf},
  url = {https://proceedings.mlr.press/v254/liu24a.html},
  abstract = {The Segment Anything Model (SAM) marks a significant advance in segmentation models, offering robust zero-shot abilities and dynamic prompting. However, existing medical SAMs are not suited to the multi-scale nature of whole-slide images (WSIs), which restricts their effectiveness. To address this drawback, we present WSI-SAM, which enhances SAM with precise object segmentation for histopathology images using multi-resolution patches while preserving its efficient, prompt-driven design and zero-shot abilities. To fully exploit pretrained knowledge while minimizing training overhead, we keep SAM frozen and introduce only minimal extra parameters and computation. In particular, we introduce a High-Resolution (HR) token, a Low-Resolution (LR) token, and a dual mask decoder. This decoder combines the original SAM mask decoder with a lightweight fusion module that merges features at multiple scales. Instead of predicting each mask independently, we fuse the HR and LR tokens at an intermediate layer to jointly learn features of the same object across resolutions. Experiments show that WSI-SAM outperforms state-of-the-art SAM and its variants. In particular, it outperforms SAM by 4.1 and 2.5 percentage points on a ductal carcinoma in situ (DCIS) segmentation task and a breast cancer metastasis segmentation task (CAMELYON16 dataset), respectively. The code will be available at https://github.com/HongLiuuuuu/WSI-SAM.}
}
Endnote
%0 Conference Paper
%T WSI-SAM: Multi-resolution Segment Anything Model (SAM) for histopathology whole-slide images
%A Hong Liu
%A Haosen Yang
%A Paul J. van Diest
%A Josien P.W. Pluim
%A Mitko Veta
%B Proceedings of the MICCAI Workshop on Computational Pathology
%C Proceedings of Machine Learning Research
%D 2024
%E Francesco Ciompi
%E Nadieh Khalili
%E Linda Studer
%E Milda Poceviciute
%E Amjad Khan
%E Mitko Veta
%E Yiping Jiao
%E Neda Haj-Hosseini
%E Hao Chen
%E Shan Raza
%E Fayyaz Minhas
%E Inti Zlobec
%E Nikolay Burlutskiy
%E Veronica Vilaplana
%E Biagio Brattoli
%E Henning Muller
%E Manfredo Atzori
%F pmlr-v254-liu24a
%I PMLR
%P 25--37
%U https://proceedings.mlr.press/v254/liu24a.html
%V 254
%X The Segment Anything Model (SAM) marks a significant advance in segmentation models, offering robust zero-shot abilities and dynamic prompting. However, existing medical SAMs are not suited to the multi-scale nature of whole-slide images (WSIs), which restricts their effectiveness. To address this drawback, we present WSI-SAM, which enhances SAM with precise object segmentation for histopathology images using multi-resolution patches while preserving its efficient, prompt-driven design and zero-shot abilities. To fully exploit pretrained knowledge while minimizing training overhead, we keep SAM frozen and introduce only minimal extra parameters and computation. In particular, we introduce a High-Resolution (HR) token, a Low-Resolution (LR) token, and a dual mask decoder. This decoder combines the original SAM mask decoder with a lightweight fusion module that merges features at multiple scales. Instead of predicting each mask independently, we fuse the HR and LR tokens at an intermediate layer to jointly learn features of the same object across resolutions. Experiments show that WSI-SAM outperforms state-of-the-art SAM and its variants. In particular, it outperforms SAM by 4.1 and 2.5 percentage points on a ductal carcinoma in situ (DCIS) segmentation task and a breast cancer metastasis segmentation task (CAMELYON16 dataset), respectively. The code will be available at https://github.com/HongLiuuuuu/WSI-SAM.
APA
Liu, H., Yang, H., Diest, P.J.v., Pluim, J.P.W. & Veta, M. (2024). WSI-SAM: Multi-resolution Segment Anything Model (SAM) for histopathology whole-slide images. Proceedings of the MICCAI Workshop on Computational Pathology, in Proceedings of Machine Learning Research 254:25-37. Available from https://proceedings.mlr.press/v254/liu24a.html.
