ICL-SAM: Synergizing In-context Learning Model and SAM in Medical Image Segmentation

Jiesi Hu, Yang Shang, Yanwu Yang, Xutao Guo, Hanyang Peng, Ting Ma
Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, PMLR 250:641-656, 2024.

Abstract

Medical image segmentation, a field facing domain shifts due to diverse imaging modalities and biomedical domains, has made strides with the development of robust models. In-Context Learning (ICL) models, like UniverSeg, demonstrate robustness to domain shifts given support image-label pairs in varied medical imaging segmentation tasks. However, their performance is still unsatisfactory. On the other hand, the Segment Anything Model (SAM) stands out as a powerful universal segmentation model. In this work, we introduce a novel methodology, ICL-SAM, that integrates the superior performance of SAM with the ICL model to create more effective segmentation models within the in-context learning paradigm. Our approach employs SAM to refine segmentation results from the ICL model and leverages the ICL model to generate prompts for SAM, eliminating the need for manual prompt provision. Additionally, we introduce a semantic confidence map generation method into our framework to guide the prediction of both the ICL model and SAM, thereby further enhancing segmentation accuracy. Our method has been extensively evaluated across multiple medical imaging contexts, including fundus, MRI, and CT images, spanning five datasets. The results demonstrate significant performance improvements, particularly in settings with few support pairs, where our method can achieve over a 10% increase in the Dice coefficient compared to a cutting-edge ICL model. Our code will be publicly available.
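The abstract describes a pipeline in which the ICL model's prediction both seeds SAM's prompts and, via a confidence map, guides the refinement. A minimal illustrative sketch of that flow is below; the function names, the centroid-based prompt derivation, and the toy confidence heuristic are all assumptions for illustration, not the authors' actual implementation or SAM's real API.

```python
import numpy as np

def icl_predict(query_image, support_pairs):
    """Stand-in for the ICL model (e.g. UniverSeg): returns a soft mask in [0, 1].
    Here a toy placeholder averages the support labels; the real model is a network."""
    return np.mean([label for _, label in support_pairs], axis=0)

def mask_to_prompts(soft_mask, threshold=0.5):
    """Derive point prompts for SAM from the ICL prediction, removing the
    need for manual prompting (prompt strategy here is illustrative)."""
    ys, xs = np.where(soft_mask > threshold)
    if len(xs) == 0:
        return []
    # Use the foreground centroid as a single positive point prompt.
    return [(int(xs.mean()), int(ys.mean()))]

def sam_refine(query_image, prompts, soft_mask):
    """Stand-in for SAM refinement guided by a semantic confidence map.
    The confidence heuristic below (distance from 0.5) is a toy assumption."""
    confidence = np.abs(soft_mask - 0.5) * 2.0
    # Trust the ICL decision where confidence is high; loosen the
    # threshold elsewhere so 'SAM' can recover uncertain foreground.
    refined = np.where(confidence > 0.8, soft_mask > 0.5, soft_mask > 0.4)
    return refined.astype(np.uint8)

def icl_sam(query_image, support_pairs):
    """End-to-end flow: ICL predicts, its output prompts SAM, SAM refines."""
    soft = icl_predict(query_image, support_pairs)
    prompts = mask_to_prompts(soft)
    return sam_refine(query_image, prompts, soft)
```

The key design point mirrored here is the bidirectional coupling: the ICL output eliminates manual prompt provision for SAM, while SAM's stronger segmentation capability corrects the ICL result.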

Cite this Paper


BibTeX
@InProceedings{pmlr-v250-hu24a,
  title = {ICL-SAM: Synergizing In-context Learning Model and SAM in Medical Image Segmentation},
  author = {Hu, Jiesi and Shang, Yang and Yang, Yanwu and Guo, Xutao and Peng, Hanyang and Ma, Ting},
  booktitle = {Proceedings of The 7th International Conference on Medical Imaging with Deep Learning},
  pages = {641--656},
  year = {2024},
  editor = {Burgos, Ninon and Petitjean, Caroline and Vakalopoulou, Maria and Christodoulidis, Stergios and Coupe, Pierrick and Delingette, Hervé and Lartizien, Carole and Mateus, Diana},
  volume = {250},
  series = {Proceedings of Machine Learning Research},
  month = {03--05 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v250/main/assets/hu24a/hu24a.pdf},
  url = {https://proceedings.mlr.press/v250/hu24a.html},
  abstract = {Medical image segmentation, a field facing domain shifts due to diverse imaging modalities and biomedical domains, has made strides with the development of robust models. In-Context Learning (ICL) models, like UniverSeg, demonstrate robustness to domain shifts given support image-label pairs in varied medical imaging segmentation tasks. However, their performance is still unsatisfactory. On the other hand, the Segment Anything Model (SAM) stands out as a powerful universal segmentation model. In this work, we introduce a novel methodology, ICL-SAM, that integrates the superior performance of SAM with the ICL model to create more effective segmentation models within the in-context learning paradigm. Our approach employs SAM to refine segmentation results from the ICL model and leverages the ICL model to generate prompts for SAM, eliminating the need for manual prompt provision. Additionally, we introduce a semantic confidence map generation method into our framework to guide the prediction of both the ICL model and SAM, thereby further enhancing segmentation accuracy. Our method has been extensively evaluated across multiple medical imaging contexts, including fundus, MRI, and CT images, spanning five datasets. The results demonstrate significant performance improvements, particularly in settings with few support pairs, where our method can achieve over a 10% increase in the Dice coefficient compared to a cutting-edge ICL model. Our code will be publicly available.}
}
Endnote
%0 Conference Paper
%T ICL-SAM: Synergizing In-context Learning Model and SAM in Medical Image Segmentation
%A Jiesi Hu
%A Yang Shang
%A Yanwu Yang
%A Xutao Guo
%A Hanyang Peng
%A Ting Ma
%B Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ninon Burgos
%E Caroline Petitjean
%E Maria Vakalopoulou
%E Stergios Christodoulidis
%E Pierrick Coupe
%E Hervé Delingette
%E Carole Lartizien
%E Diana Mateus
%F pmlr-v250-hu24a
%I PMLR
%P 641--656
%U https://proceedings.mlr.press/v250/hu24a.html
%V 250
%X Medical image segmentation, a field facing domain shifts due to diverse imaging modalities and biomedical domains, has made strides with the development of robust models. In-Context Learning (ICL) models, like UniverSeg, demonstrate robustness to domain shifts given support image-label pairs in varied medical imaging segmentation tasks. However, their performance is still unsatisfactory. On the other hand, the Segment Anything Model (SAM) stands out as a powerful universal segmentation model. In this work, we introduce a novel methodology, ICL-SAM, that integrates the superior performance of SAM with the ICL model to create more effective segmentation models within the in-context learning paradigm. Our approach employs SAM to refine segmentation results from the ICL model and leverages the ICL model to generate prompts for SAM, eliminating the need for manual prompt provision. Additionally, we introduce a semantic confidence map generation method into our framework to guide the prediction of both the ICL model and SAM, thereby further enhancing segmentation accuracy. Our method has been extensively evaluated across multiple medical imaging contexts, including fundus, MRI, and CT images, spanning five datasets. The results demonstrate significant performance improvements, particularly in settings with few support pairs, where our method can achieve over a 10% increase in the Dice coefficient compared to a cutting-edge ICL model. Our code will be publicly available.
APA
Hu, J., Shang, Y., Yang, Y., Guo, X., Peng, H. & Ma, T. (2024). ICL-SAM: Synergizing In-context Learning Model and SAM in Medical Image Segmentation. Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 250:641-656. Available from https://proceedings.mlr.press/v250/hu24a.html.

Related Material