SegX: Improving Interpretability of Clinical Image Diagnosis with Segmentation-based Enhancement
Proceedings of The First AAAI Bridge Program on AI for Medicine and Healthcare, PMLR 281:100-108, 2025.
Abstract
Deep learning-based medical image analysis faces a significant barrier due to its lack of interpretability. Conventional explainable AI (XAI) techniques, such as Grad-CAM and SHAP, often highlight regions outside clinical interest. To address this issue, we propose Segmentation-based Explanation (SegX), a plug-and-play approach that enhances interpretability by aligning the model's explanation map with clinically relevant areas, leveraging the power of segmentation models. Furthermore, we introduce Segmentation-based Uncertainty Assessment (SegU), a method that quantifies the uncertainty of the prediction model by measuring the 'distance' between interpretation maps and clinically significant regions. Our experiments on dermoscopic and chest X-ray datasets show that SegX consistently improves interpretability across modalities, and that the certainty score provided by SegU reliably reflects the correctness of the model's predictions. Our approach offers a model-agnostic enhancement to medical image diagnosis, a step toward reliable and interpretable AI in clinical decision-making. The implementation is publicly available at https://github.com/JasonZuu/SegX.
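To make the two components concrete, here is a minimal Python sketch of the idea as described in the abstract: a SegX-style step restricts a saliency map (e.g., from Grad-CAM) to a segmentation mask of the clinically relevant region, and a SegU-style step scores how much of the saliency mass falls inside that region. The function names (`segx_enhance`, `segu_certainty`) and the overlap-based certainty measure are illustrative assumptions; the paper's actual formulations may differ.

```python
import numpy as np

def segx_enhance(saliency: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """SegX-style enhancement (sketch): align an explanation map with
    the clinically relevant region by suppressing attributions outside
    the segmentation mask, then renormalize to [0, 1]."""
    enhanced = saliency * mask          # zero out attributions outside the region
    peak = enhanced.max()
    return enhanced / peak if peak > 0 else enhanced

def segu_certainty(saliency: np.ndarray, mask: np.ndarray) -> float:
    """SegU-style certainty (sketch): one plausible 'distance'-like
    measure is the fraction of total saliency mass inside the
    segmented region (1.0 = explanation fully inside)."""
    total = saliency.sum()
    return float((saliency * mask).sum() / total) if total > 0 else 0.0

# Toy usage: a 4x4 Grad-CAM-style map and a binary lesion mask.
saliency = np.random.rand(4, 4)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
print(segx_enhance(saliency, mask))
print(f"certainty: {segu_certainty(saliency, mask):.2f}")
```

In this sketch both steps are model-agnostic, matching the abstract's plug-and-play framing: they operate on any saliency map and any segmentation mask of the same shape, without touching the prediction model itself.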