SegX: Improving Interpretability of Clinical Image Diagnosis with Segmentation-based Enhancement

Yuhao Zhang, Mingcheng Zhu, Zhiyao Luo
Proceedings of The First AAAI Bridge Program on AI for Medicine and Healthcare, PMLR 281:100-108, 2025.

Abstract

Deep learning-based medical image analysis faces a significant barrier due to the lack of interpretability. Conventional explainable AI (XAI) techniques, such as Grad-CAM and SHAP, often highlight regions outside clinical interest. To address this issue, we propose Segmentation-based Explanation (SegX), a plug-and-play approach that enhances interpretability by leveraging segmentation models to align the model’s explanation map with clinically relevant areas. Furthermore, we introduce Segmentation-based Uncertainty Assessment (SegU), a method to quantify the uncertainty of the prediction model by measuring the ‘distance’ between interpretation maps and clinically significant regions. Our experiments on dermoscopic and chest X-ray datasets show that SegX improves interpretability consistently across modalities, and the certainty score provided by SegU reliably reflects the correctness of the model’s predictions. Our approach offers a model-agnostic enhancement to medical image diagnosis towards reliable and interpretable AI in clinical decision-making. The implementation is publicly available at https://github.com/JasonZuu/SegX.

Cite this Paper


BibTeX
@InProceedings{pmlr-v281-zhang25c,
  title     = {SegX: Improving Interpretability of Clinical Image Diagnosis with Segmentation-based Enhancement},
  author    = {Zhang, Yuhao and Zhu, Mingcheng and Luo, Zhiyao},
  booktitle = {Proceedings of The First AAAI Bridge Program on AI for Medicine and Healthcare},
  pages     = {100--108},
  year      = {2025},
  editor    = {Wu, Junde and Zhu, Jiayuan and Xu, Min and Jin, Yueming},
  volume    = {281},
  series    = {Proceedings of Machine Learning Research},
  month     = {25 Feb},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v281/main/assets/zhang25c/zhang25c.pdf},
  url       = {https://proceedings.mlr.press/v281/zhang25c.html},
  abstract  = {Deep learning-based medical image analysis faces a significant barrier due to the lack of interpretability. Conventional explainable AI (XAI) techniques, such as Grad-CAM and SHAP, often highlight regions outside clinical interest. To address this issue, we propose Segmentation-based Explanation (SegX), a plug-and-play approach that enhances interpretability by leveraging segmentation models to align the model’s explanation map with clinically relevant areas. Furthermore, we introduce Segmentation-based Uncertainty Assessment (SegU), a method to quantify the uncertainty of the prediction model by measuring the ‘distance’ between interpretation maps and clinically significant regions. Our experiments on dermoscopic and chest X-ray datasets show that SegX improves interpretability consistently across modalities, and the certainty score provided by SegU reliably reflects the correctness of the model’s predictions. Our approach offers a model-agnostic enhancement to medical image diagnosis towards reliable and interpretable AI in clinical decision-making. The implementation is publicly available at https://github.com/JasonZuu/SegX.}
}
Endnote
%0 Conference Paper
%T SegX: Improving Interpretability of Clinical Image Diagnosis with Segmentation-based Enhancement
%A Yuhao Zhang
%A Mingcheng Zhu
%A Zhiyao Luo
%B Proceedings of The First AAAI Bridge Program on AI for Medicine and Healthcare
%C Proceedings of Machine Learning Research
%D 2025
%E Junde Wu
%E Jiayuan Zhu
%E Min Xu
%E Yueming Jin
%F pmlr-v281-zhang25c
%I PMLR
%P 100--108
%U https://proceedings.mlr.press/v281/zhang25c.html
%V 281
%X Deep learning-based medical image analysis faces a significant barrier due to the lack of interpretability. Conventional explainable AI (XAI) techniques, such as Grad-CAM and SHAP, often highlight regions outside clinical interest. To address this issue, we propose Segmentation-based Explanation (SegX), a plug-and-play approach that enhances interpretability by leveraging segmentation models to align the model’s explanation map with clinically relevant areas. Furthermore, we introduce Segmentation-based Uncertainty Assessment (SegU), a method to quantify the uncertainty of the prediction model by measuring the ‘distance’ between interpretation maps and clinically significant regions. Our experiments on dermoscopic and chest X-ray datasets show that SegX improves interpretability consistently across modalities, and the certainty score provided by SegU reliably reflects the correctness of the model’s predictions. Our approach offers a model-agnostic enhancement to medical image diagnosis towards reliable and interpretable AI in clinical decision-making. The implementation is publicly available at https://github.com/JasonZuu/SegX.
APA
Zhang, Y., Zhu, M., & Luo, Z. (2025). SegX: Improving Interpretability of Clinical Image Diagnosis with Segmentation-based Enhancement. Proceedings of The First AAAI Bridge Program on AI for Medicine and Healthcare, in Proceedings of Machine Learning Research 281:100-108. Available from https://proceedings.mlr.press/v281/zhang25c.html.