XAI-MeD: Explainable Knowledge Guided Neuro-Symbolic Framework for Domain Generalization and Rare Class Detection in Medical Imaging

Midhat Urooj, Ayan Banerjee, Sandeep Gupta
Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare, PMLR 317:265-274, 2026.

Abstract

Explainability, domain generalization, and rare-class reliability are critical challenges in medical AI, where deep models often fail under real-world distribution shifts and exhibit bias against infrequent clinical conditions. This paper introduces XAI-MeD, an explainable medical AI framework that integrates clinically accurate expert knowledge into deep learning through a unified neuro-symbolic architecture. XAI-MeD is designed to improve robustness under distribution shift, enhance rare-class sensitivity, and deliver transparent, clinically aligned interpretations. The framework encodes clinical expertise as logical connectives over atomic medical propositions, transforming them into machine-checkable, class-specific rules. Their diagnostic utility is quantified through weighted feature satisfaction scores, enabling a symbolic reasoning branch that complements neural predictions. A confidence-weighted fusion integrates symbolic and deep outputs, while a Hunt-inspired adaptive routing mechanism, guided by Entropy Imbalance Gain (EIG) and Rare-Class Gini, mitigates class imbalance, high intra-class variability, and uncertainty. We evaluate XAI-MeD across diverse modalities on four challenging tasks, including (i) Seizure Onset Zone (SOZ) localization from rs-fMRI and (ii) Diabetic Retinopathy grading. Experiments across six multicenter datasets demonstrate substantial performance improvements, including 6% gains in cross-domain generalization and a 10% improvement in rare-class F1 score, far outperforming state-of-the-art deep learning baselines. Ablation studies confirm that the clinically grounded symbolic components act as effective regularizers, ensuring robustness to distribution shifts. XAI-MeD thus provides a principled, clinically faithful, and interpretable approach to multimodal medical AI.
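The symbolic branch and fusion step described in the abstract can be sketched in a few lines. The paper does not publish its exact rule encoding, satisfaction weighting, or fusion weights here, so all names below (`satisfaction_score`, `entropy_confidence`, `fuse`) and the entropy-based confidence are illustrative assumptions, not the authors' implementation: a rule is a weighted set of atomic propositions, its satisfaction score is the weighted fraction of satisfied propositions, and the symbolic and neural class distributions are blended by each branch's confidence.

```python
import numpy as np

def satisfaction_score(features, rule):
    """Weighted feature satisfaction: the weighted fraction of a rule's
    atomic propositions (boolean predicates over `features`) that hold.
    `rule` is a list of (predicate, weight) pairs."""
    total = sum(w for _, w in rule)
    hit = sum(w for pred, w in rule if pred(features))
    return hit / total

def entropy_confidence(p):
    """Confidence of a class distribution as 1 minus its normalized
    Shannon entropy (1 = one-hot certainty, 0 = uniform)."""
    p = np.asarray(p, dtype=float)
    h = -np.sum(p * np.log(p + 1e-12))
    return 1.0 - h / np.log(len(p))

def fuse(p_neural, p_symbolic):
    """Confidence-weighted convex combination of the neural and symbolic
    class distributions, renormalized to sum to one."""
    c_n = entropy_confidence(p_neural)
    c_s = entropy_confidence(p_symbolic)
    w = c_n / (c_n + c_s + 1e-12)
    fused = w * np.asarray(p_neural) + (1.0 - w) * np.asarray(p_symbolic)
    return fused / fused.sum()
```

For example, a hypothetical Diabetic Retinopathy rule with two weighted propositions gives a satisfaction score of 2/3 when only the weight-2 proposition holds, and a confident neural distribution dominates an uncertain symbolic one in the fused output.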

Cite this Paper


BibTeX
@InProceedings{pmlr-v317-urooj26a,
  title     = {XAI-MeD: Explainable Knowledge Guided Neuro-Symbolic Framework for Domain Generalization and Rare Class Detection in Medical Imaging},
  author    = {Urooj, Midhat and Banerjee, Ayan and Gupta, Sandeep},
  booktitle = {Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare},
  pages     = {265--274},
  year      = {2026},
  editor    = {Wu, Junde and Pan, Jiazhen and Zhu, Jiayuan and Luo, Luyang and Li, Yitong and Xu, Min and Jin, Yueming and Rueckert, Daniel},
  volume    = {317},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--21 Jan},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v317/main/assets/urooj26a/urooj26a.pdf},
  url       = {https://proceedings.mlr.press/v317/urooj26a.html},
  abstract  = {Explainability, domain generalization, and rare-class reliability are critical challenges in medical AI, where deep models often fail under real-world distribution shifts and exhibit bias against infrequent clinical conditions. This paper introduces XAI-MeD, an explainable medical AI framework that integrates clinically accurate expert knowledge into deep learning through a unified neuro-symbolic architecture. XAI-MeD is designed to improve robustness under distribution shift, enhance rare-class sensitivity, and deliver transparent, clinically aligned interpretations. The framework encodes clinical expertise as logical connectives over atomic medical propositions, transforming them into machine-checkable, class-specific rules. Their diagnostic utility is quantified through weighted feature satisfaction scores, enabling a symbolic reasoning branch that complements neural predictions. A confidence-weighted fusion integrates symbolic and deep outputs, while a Hunt-inspired adaptive routing mechanism, guided by Entropy Imbalance Gain (EIG) and Rare-Class Gini, mitigates class imbalance, high intra-class variability, and uncertainty. We evaluate XAI-MeD across diverse modalities on four challenging tasks, including (i) Seizure Onset Zone (SOZ) localization from rs-fMRI and (ii) Diabetic Retinopathy grading. Experiments across six multicenter datasets demonstrate substantial performance improvements, including 6% gains in cross-domain generalization and a 10% improvement in rare-class F1 score, far outperforming state-of-the-art deep learning baselines. Ablation studies confirm that the clinically grounded symbolic components act as effective regularizers, ensuring robustness to distribution shifts. XAI-MeD thus provides a principled, clinically faithful, and interpretable approach to multimodal medical AI.}
}
Endnote
%0 Conference Paper
%T XAI-MeD: Explainable Knowledge Guided Neuro-Symbolic Framework for Domain Generalization and Rare Class Detection in Medical Imaging
%A Midhat Urooj
%A Ayan Banerjee
%A Sandeep Gupta
%B Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare
%C Proceedings of Machine Learning Research
%D 2026
%E Junde Wu
%E Jiazhen Pan
%E Jiayuan Zhu
%E Luyang Luo
%E Yitong Li
%E Min Xu
%E Yueming Jin
%E Daniel Rueckert
%F pmlr-v317-urooj26a
%I PMLR
%P 265--274
%U https://proceedings.mlr.press/v317/urooj26a.html
%V 317
%X Explainability, domain generalization, and rare-class reliability are critical challenges in medical AI, where deep models often fail under real-world distribution shifts and exhibit bias against infrequent clinical conditions. This paper introduces XAI-MeD, an explainable medical AI framework that integrates clinically accurate expert knowledge into deep learning through a unified neuro-symbolic architecture. XAI-MeD is designed to improve robustness under distribution shift, enhance rare-class sensitivity, and deliver transparent, clinically aligned interpretations. The framework encodes clinical expertise as logical connectives over atomic medical propositions, transforming them into machine-checkable, class-specific rules. Their diagnostic utility is quantified through weighted feature satisfaction scores, enabling a symbolic reasoning branch that complements neural predictions. A confidence-weighted fusion integrates symbolic and deep outputs, while a Hunt-inspired adaptive routing mechanism, guided by Entropy Imbalance Gain (EIG) and Rare-Class Gini, mitigates class imbalance, high intra-class variability, and uncertainty. We evaluate XAI-MeD across diverse modalities on four challenging tasks, including (i) Seizure Onset Zone (SOZ) localization from rs-fMRI and (ii) Diabetic Retinopathy grading. Experiments across six multicenter datasets demonstrate substantial performance improvements, including 6% gains in cross-domain generalization and a 10% improvement in rare-class F1 score, far outperforming state-of-the-art deep learning baselines. Ablation studies confirm that the clinically grounded symbolic components act as effective regularizers, ensuring robustness to distribution shifts. XAI-MeD thus provides a principled, clinically faithful, and interpretable approach to multimodal medical AI.
APA
Urooj, M., Banerjee, A. & Gupta, S. (2026). XAI-MeD: Explainable Knowledge Guided Neuro-Symbolic Framework for Domain Generalization and Rare Class Detection in Medical Imaging. Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare, in Proceedings of Machine Learning Research 317:265-274. Available from https://proceedings.mlr.press/v317/urooj26a.html.