Training-free Prompt Placement by Propagation for SAM Predictions in Bone CT Scans

Caroline Magg, Lukas P.E. Verweij, Maaike A. ter Wee, George S. Buijs, Johannes G.G. Dobbe, Geert J. Streekstra, Leendert Blankevoort, Clara I. Sánchez
Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, PMLR 250:964-985, 2024.

Abstract

The Segment Anything Model (SAM) is an interactive foundation segmentation model, showing impressive results for 2D natural images using prompts such as points and boxes. Transferring these results to medical image segmentation is challenging due to the 3D nature of medical images and the high demand for manual interaction. As a 2D architecture, SAM is applied slice by slice to a 3D medical scan. This hinders the application of SAM for volumetric medical scans since at least one prompt per class is needed for each single slice. In our work, we improve the applicability by reducing the number of necessary user-generated prompts. We introduce and evaluate multiple training-free strategies to automatically place box prompts in bone CT volumes, given only one initial box prompt per class. The average performance of our methods ranges from 54.22% Dice to 88.26% Dice. At the same time, the number of annotated pixels is reduced significantly from a few millions to two pixels per class. These promising results underline the potential of foundation models in medical image segmentation, paving the way for annotation-efficient, general approaches.
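The core idea of prompt placement by propagation can be sketched in a few lines: starting from a single user-drawn box on one slice, each slice's predicted mask yields a (padded) bounding box that serves as the prompt for the adjacent slice. The sketch below is a minimal, hypothetical illustration of that scheme, not the paper's exact strategies; `segment_slice` stands in for a SAM forward pass with a box prompt, and the padding value is an assumption.

```python
import numpy as np

def bbox_from_mask(mask, pad=3):
    """Return a padded [x_min, y_min, x_max, y_max] box around a binary mask,
    or None if the mask is empty (object no longer visible)."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    h, w = mask.shape
    return [max(int(xs.min()) - pad, 0),
            max(int(ys.min()) - pad, 0),
            min(int(xs.max()) + pad, w - 1),
            min(int(ys.max()) + pad, h - 1)]

def propagate_box_prompts(volume, start_slice, init_box, segment_slice, pad=3):
    """Segment a 3D volume slice by slice from one initial box prompt.

    `segment_slice(image_2d, box) -> binary mask` is a stand-in for a SAM
    prediction with a box prompt. The bounding box of each slice's mask
    becomes the prompt for the next slice, sweeping up then down from the
    annotated slice.
    """
    masks = np.zeros(volume.shape, dtype=bool)
    for sweep in (range(start_slice, volume.shape[0]),        # upward
                  range(start_slice - 1, -1, -1)):            # downward
        box = init_box
        for z in sweep:
            if box is None:
                break  # object left the field of view; stop propagating
            mask = segment_slice(volume[z], box)
            masks[z] = mask
            box = bbox_from_mask(mask, pad)
    return masks
```

With a real model, `segment_slice` would wrap a `SamPredictor` call; the propagation loop itself needs no training, which is what makes the approach training-free.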

Cite this Paper


BibTeX
@InProceedings{pmlr-v250-magg24a,
  title = {Training-free Prompt Placement by Propagation for SAM Predictions in Bone CT Scans},
  author = {Magg, Caroline and Verweij, Lukas P.E. and ter Wee, Maaike A. and Buijs, George S. and Dobbe, Johannes G.G. and Streekstra, Geert J. and Blankevoort, Leendert and S\'anchez, Clara I.},
  booktitle = {Proceedings of The 7th International Conference on Medical Imaging with Deep Learning},
  pages = {964--985},
  year = {2024},
  editor = {Burgos, Ninon and Petitjean, Caroline and Vakalopoulou, Maria and Christodoulidis, Stergios and Coupe, Pierrick and Delingette, Hervé and Lartizien, Carole and Mateus, Diana},
  volume = {250},
  series = {Proceedings of Machine Learning Research},
  month = {03--05 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v250/main/assets/magg24a/magg24a.pdf},
  url = {https://proceedings.mlr.press/v250/magg24a.html},
  abstract = {The Segment Anything Model (SAM) is an interactive foundation segmentation model, showing impressive results for 2D natural images using prompts such as points and boxes. Transferring these results to medical image segmentation is challenging due to the 3D nature of medical images and the high demand for manual interaction. As a 2D architecture, SAM is applied slice by slice to a 3D medical scan. This hinders the application of SAM for volumetric medical scans since at least one prompt per class is needed for each single slice. In our work, we improve the applicability by reducing the number of necessary user-generated prompts. We introduce and evaluate multiple training-free strategies to automatically place box prompts in bone CT volumes, given only one initial box prompt per class. The average performance of our methods ranges from 54.22% Dice to 88.26% Dice. At the same time, the number of annotated pixels is reduced significantly from a few millions to two pixels per class. These promising results underline the potential of foundation models in medical image segmentation, paving the way for annotation-efficient, general approaches.}
}
Endnote
%0 Conference Paper
%T Training-free Prompt Placement by Propagation for SAM Predictions in Bone CT Scans
%A Caroline Magg
%A Lukas P.E. Verweij
%A Maaike A. ter Wee
%A George S. Buijs
%A Johannes G.G. Dobbe
%A Geert J. Streekstra
%A Leendert Blankevoort
%A Clara I. Sánchez
%B Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ninon Burgos
%E Caroline Petitjean
%E Maria Vakalopoulou
%E Stergios Christodoulidis
%E Pierrick Coupe
%E Hervé Delingette
%E Carole Lartizien
%E Diana Mateus
%F pmlr-v250-magg24a
%I PMLR
%P 964--985
%U https://proceedings.mlr.press/v250/magg24a.html
%V 250
%X The Segment Anything Model (SAM) is an interactive foundation segmentation model, showing impressive results for 2D natural images using prompts such as points and boxes. Transferring these results to medical image segmentation is challenging due to the 3D nature of medical images and the high demand for manual interaction. As a 2D architecture, SAM is applied slice by slice to a 3D medical scan. This hinders the application of SAM for volumetric medical scans since at least one prompt per class is needed for each single slice. In our work, we improve the applicability by reducing the number of necessary user-generated prompts. We introduce and evaluate multiple training-free strategies to automatically place box prompts in bone CT volumes, given only one initial box prompt per class. The average performance of our methods ranges from 54.22% Dice to 88.26% Dice. At the same time, the number of annotated pixels is reduced significantly from a few millions to two pixels per class. These promising results underline the potential of foundation models in medical image segmentation, paving the way for annotation-efficient, general approaches.
APA
Magg, C., Verweij, L.P., ter Wee, M.A., Buijs, G.S., Dobbe, J.G., Streekstra, G.J., Blankevoort, L. & Sánchez, C.I. (2024). Training-free Prompt Placement by Propagation for SAM Predictions in Bone CT Scans. Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 250:964-985. Available from https://proceedings.mlr.press/v250/magg24a.html.