Explaining Negative Classifications of AI Models in Tumor Diagnosis

David A. Kelly, Hana Chockler, Nathan Blake
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:2069-2081, 2025.

Abstract

Using AI models in healthcare is gaining popularity. To improve clinician confidence in the results of automated triage and to provide further information about the suggested diagnosis, an explanation produced by a separate post-hoc explainability tool often accompanies the classification of an AI model. If no abnormalities are detected, however, it is not clear what an explanation should be. A human clinician might be able to describe certain salient features of tumors that are not in the scan, but existing Explainable AI tools cannot do that, as they cannot point to features that are absent from the input. In this paper, we present a definition of and an algorithm for providing explanations of absence; that is, explanations of negative classifications in the context of healthcare AI. Our approach is rooted in the concept of explanations in actual causality. It uses the model as a black box and is hence portable and works with proprietary models. Moreover, the computation is done in the preprocessing stage, based on the model and the dataset. During execution, the algorithm only projects the precomputed explanation template onto the current image. We implemented this approach in a tool, nito, and trialed it on a number of medical datasets to demonstrate its utility on the classification of solid tumors. We discuss the differences between the theoretical approach and the implementation in the domain of classifying solid tumors and address the additional complications posed by this domain. Finally, we discuss the assumptions we make in our algorithm and its possible extensions to explanations of absence for general image classifiers.
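Illustration (not from the paper): the workflow described above precomputes an explanation template from the model and the dataset, then, at execution time, projects that template onto the current image. The Python sketch below shows one way such a pipeline could be wired up under assumed interfaces; the occlusion-based scoring, the model_predict wrapper, the patch size, and the threshold are illustrative assumptions, not the algorithm implemented in nito.

```python
import numpy as np

def precompute_template(model_predict, images, patch=16, threshold=0.5):
    """Hypothetical preprocessing step: given black-box access to the
    classifier (model_predict returns the positive-class score) and a
    dataset of positively classified grayscale images of equal size,
    accumulate a per-pixel map of the regions the model relies on."""
    h, w = images[0].shape
    template = np.zeros((h, w))
    for img in images:
        base = model_predict(img)
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                # Occlude one patch and record the drop in the score.
                occluded = img.copy()
                occluded[y:y + patch, x:x + patch] = 0
                drop = max(base - model_predict(occluded), 0.0)
                template[y:y + patch, x:x + patch] += drop
    template /= len(images)
    # Keep only the most salient regions as the explanation template.
    return template > threshold * template.max()

def explain_absence(template, image):
    """Execution step: project the precomputed template onto the current
    (negatively classified) image, highlighting where salient tumor
    features would be expected but are absent."""
    return np.where(template, image, 0)
```

Here model_predict would wrap the (possibly proprietary) classifier, consistent with the black-box setting described above; the paper's actual template construction is grounded in actual causality and is described in the full text.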

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-kelly25a,
  title     = {Explaining Negative Classifications of AI Models in Tumor Diagnosis},
  author    = {Kelly, David A. and Chockler, Hana and Blake, Nathan},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {2069--2081},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/kelly25a/kelly25a.pdf},
  url       = {https://proceedings.mlr.press/v286/kelly25a.html},
  abstract  = {Using AI models in healthcare is gaining popularity. To improve clinician confidence in the results of automated triage and to provide further information about the suggested diagnosis, an explanation produced by a separate post-hoc explainability tool often accompanies the classification of an AI model. If no abnormalities are detected, however, it is not clear what an explanation should be. A human clinician might be able to describe certain salient features of tumors that are not in scan, but existing Explainable AI tools cannot do that, as they cannot point to features that are absent from the input. In this paper, we present a definition of and algorithm for providing explanations of absence; that is, explanations of negative classifications in the context of healthcare AI. Our approach is rooted in the concept of explanations in actual causality. It uses the model as a black-box and is hence portable and works with proprietary models. Moreover, the computation is done in the preprocessing stage, based on the model and the dataset. During the execution, the algorithm only projects the precomputed explanation template on the current image. We implemented this approach in a tool, nito, and trialed it on a number of medical datasets to demonstrate its utility on the classification of solid tumors. We discuss the differences between the theoretical approach and the implementation in the domain of classifying solid tumors and address the additional complications posed by this domain. Finally, we discuss the assumptions we make in our algorithm and its possible extensions to explanations of absence for general image classifiers.}
}
Endnote
%0 Conference Paper
%T Explaining Negative Classifications of AI Models in Tumor Diagnosis
%A David A. Kelly
%A Hana Chockler
%A Nathan Blake
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-kelly25a
%I PMLR
%P 2069--2081
%U https://proceedings.mlr.press/v286/kelly25a.html
%V 286
%X Using AI models in healthcare is gaining popularity. To improve clinician confidence in the results of automated triage and to provide further information about the suggested diagnosis, an explanation produced by a separate post-hoc explainability tool often accompanies the classification of an AI model. If no abnormalities are detected, however, it is not clear what an explanation should be. A human clinician might be able to describe certain salient features of tumors that are not in scan, but existing Explainable AI tools cannot do that, as they cannot point to features that are absent from the input. In this paper, we present a definition of and algorithm for providing explanations of absence; that is, explanations of negative classifications in the context of healthcare AI. Our approach is rooted in the concept of explanations in actual causality. It uses the model as a black-box and is hence portable and works with proprietary models. Moreover, the computation is done in the preprocessing stage, based on the model and the dataset. During the execution, the algorithm only projects the precomputed explanation template on the current image. We implemented this approach in a tool, nito, and trialed it on a number of medical datasets to demonstrate its utility on the classification of solid tumors. We discuss the differences between the theoretical approach and the implementation in the domain of classifying solid tumors and address the additional complications posed by this domain. Finally, we discuss the assumptions we make in our algorithm and its possible extensions to explanations of absence for general image classifiers.
APA
Kelly, D.A., Chockler, H. & Blake, N. (2025). Explaining Negative Classifications of AI Models in Tumor Diagnosis. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:2069-2081. Available from https://proceedings.mlr.press/v286/kelly25a.html.
