NEMt: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks

Bjørn Leth Møller, Sepideh Amiri, Christian Igel, Kristoffer Knutsen Wickstrøm, Robert Jenssen, Matthias Keicher, Mohammad Farid Azampour, Nassir Navab, Bulat Ibragimov
Proceedings of the 6th Northern Lights Deep Learning Conference (NLDL), PMLR 265:184-192, 2025.

Abstract

A fundamental barrier to the adoption of AI systems in clinical practice is the insufficient transparency of AI decision-making. The field of Explainable Artificial Intelligence (XAI) seeks to provide human-interpretable explanations for a given AI model. The recently proposed Neural Explanation Mask (NEM) framework is the first XAI method to explain learned representations with high accuracy at real-time speed. NEM transforms a given differentiable model into a self-explaining system by augmenting it with a neural network-based explanation module. This module is trained in an unsupervised manner to output occlusion-based explanations for the original model. However, the current framework does not consider labels associated with the inputs. This makes it unsuitable for many important tasks in the medical domain that require explanations specific to particular output dimensions, such as pathology discovery, disease severity regression, and multi-label data classification. In this work, we address this issue by introducing a loss function for training explanation modules incorporating labels. It steers explanations toward target labels alongside an integrated smoothing operator, which reduces artifacts in the explanation masks. We validate the resulting Neural Explanation Masks with target labels (NEMt) framework on public databases of lung radiographs and skin images. The obtained results are superior to the state-of-the-art XAI methods in terms of explanation relevancy mass, complexity, and sparseness. Moreover, the explanation generation is several hundred times faster, allowing for real-time clinical applications. The code is publicly available at https://github.com/baerminator/NEM_T
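At a high level, an occlusion-based explanation mask m ∈ [0,1]^d is optimized so that the masked input m ⊙ x preserves the model's output for a chosen target label, while the mask stays small and smooth. The sketch below is illustrative only and is not the authors' implementation: the toy linear "model", the penalty weights, and the use of total variation as a stand-in for the paper's smoothing operator are all assumptions.

```python
import numpy as np

def targeted_mask_loss(mask, x, W, b, target, lam_area=0.1, lam_tv=0.05):
    """Toy targeted occlusion loss: keep the target logit high under the
    masked input, while penalizing mask area (sparseness) and roughness
    (total variation, a simple proxy for a smoothing operator)."""
    masked = mask * x                       # occlude input with mask in [0, 1]
    logits = masked.reshape(-1) @ W + b     # toy linear "model"
    logits -= logits.max()                  # numerical stability for softmax
    p = np.exp(logits) / np.exp(logits).sum()
    fidelity = -np.log(p[target] + 1e-12)   # steer toward the target label
    area = np.abs(mask).mean()              # sparse masks are easier to read
    tv = (np.abs(np.diff(mask, axis=0)).mean()
          + np.abs(np.diff(mask, axis=1)).mean())  # discourage artifacts
    return fidelity + lam_area * area + lam_tv * tv
```

In the actual framework the mask is not optimized per image; it is produced by a neural explanation module trained over a dataset with an objective of this general shape, so generating an explanation at test time costs only a single forward pass.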

Cite this Paper


BibTeX
@InProceedings{pmlr-v265-moller25a,
  title     = {{NEM}t: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks},
  author    = {M{\o}ller, Bj{\o}rn Leth and Amiri, Sepideh and Igel, Christian and Wickstr{\o}m, Kristoffer Knutsen and Jenssen, Robert and Keicher, Matthias and Azampour, Mohammad Farid and Navab, Nassir and Ibragimov, Bulat},
  booktitle = {Proceedings of the 6th Northern Lights Deep Learning Conference (NLDL)},
  pages     = {184--192},
  year      = {2025},
  editor    = {Lutchyn, Tetiana and Ramírez Rivera, Adín and Ricaud, Benjamin},
  volume    = {265},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--09 Jan},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v265/main/assets/moller25a/moller25a.pdf},
  url       = {https://proceedings.mlr.press/v265/moller25a.html},
  abstract  = {A fundamental barrier to the adoption of AI systems in clinical practice is the insufficient transparency of AI decision-making. The field of Explainable Artificial Intelligence (XAI) seeks to provide human-interpretable explanations for a given AI model. The recently proposed Neural Explanation Mask (NEM) framework is the first XAI method to explain learned representations with high accuracy at real-time speed. NEM transforms a given differentiable model into a self-explaining system by augmenting it with a neural network-based explanation module. This module is trained in an unsupervised manner to output occlusion-based explanations for the original model. However, the current framework does not consider labels associated with the inputs. This makes it unsuitable for many important tasks in the medical domain that require explanations specific to particular output dimensions, such as pathology discovery, disease severity regression, and multi-label data classification. In this work, we address this issue by introducing a loss function for training explanation modules incorporating labels. It steers explanations toward target labels alongside an integrated smoothing operator, which reduces artifacts in the explanation masks. We validate the resulting Neural Explanation Masks with target labels (NEMt) framework on public databases of lung radiographs and skin images. The obtained results are superior to the state-of-the-art XAI methods in terms of explanation relevancy mass, complexity, and sparseness. Moreover, the explanation generation is several hundred times faster, allowing for real-time clinical applications. The code is publicly available at https://github.com/baerminator/NEM_T}
}
Endnote
%0 Conference Paper
%T NEMt: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks
%A Bjørn Leth Møller
%A Sepideh Amiri
%A Christian Igel
%A Kristoffer Knutsen Wickstrøm
%A Robert Jenssen
%A Matthias Keicher
%A Mohammad Farid Azampour
%A Nassir Navab
%A Bulat Ibragimov
%B Proceedings of the 6th Northern Lights Deep Learning Conference (NLDL)
%C Proceedings of Machine Learning Research
%D 2025
%E Tetiana Lutchyn
%E Adín Ramírez Rivera
%E Benjamin Ricaud
%F pmlr-v265-moller25a
%I PMLR
%P 184--192
%U https://proceedings.mlr.press/v265/moller25a.html
%V 265
%X A fundamental barrier to the adoption of AI systems in clinical practice is the insufficient transparency of AI decision-making. The field of Explainable Artificial Intelligence (XAI) seeks to provide human-interpretable explanations for a given AI model. The recently proposed Neural Explanation Mask (NEM) framework is the first XAI method to explain learned representations with high accuracy at real-time speed. NEM transforms a given differentiable model into a self-explaining system by augmenting it with a neural network-based explanation module. This module is trained in an unsupervised manner to output occlusion-based explanations for the original model. However, the current framework does not consider labels associated with the inputs. This makes it unsuitable for many important tasks in the medical domain that require explanations specific to particular output dimensions, such as pathology discovery, disease severity regression, and multi-label data classification. In this work, we address this issue by introducing a loss function for training explanation modules incorporating labels. It steers explanations toward target labels alongside an integrated smoothing operator, which reduces artifacts in the explanation masks. We validate the resulting Neural Explanation Masks with target labels (NEMt) framework on public databases of lung radiographs and skin images. The obtained results are superior to the state-of-the-art XAI methods in terms of explanation relevancy mass, complexity, and sparseness. Moreover, the explanation generation is several hundred times faster, allowing for real-time clinical applications. The code is publicly available at https://github.com/baerminator/NEM_T
APA
Møller, B.L., Amiri, S., Igel, C., Wickstrøm, K.K., Jenssen, R., Keicher, M., Azampour, M.F., Navab, N. & Ibragimov, B. (2025). NEMt: Fast Targeted Explanations for Medical Image Models via Neural Explanation Masks. Proceedings of the 6th Northern Lights Deep Learning Conference (NLDL), in Proceedings of Machine Learning Research 265:184-192. Available from https://proceedings.mlr.press/v265/moller25a.html.