Evaluating Uncertainty-Based Deep Learning Explanations for Prostate Lesion Detection

Christopher M Trombley, Mehmet Akif Gulum, Merve Ozen, Enes Esen, Melih Aksamoglu, Mehmed Kantardzic
Proceedings of the 7th Machine Learning for Healthcare Conference, PMLR 182:874-891, 2022.

Abstract

Deep learning has demonstrated impressive accuracy for prostate lesion identification and classification. Deep learning algorithms are considered black-box methods; they therefore require explanation methods to give insight into the model's classifications. For high-stakes tasks such as medical diagnosis, it is important that explanation methods can estimate explanation uncertainty. Various methods have recently been proposed for providing uncertainty-based explanations; however, the clinical effectiveness of these methods, and what radiologists deem explainable in this context, remain largely unknown. To that end, this pilot study investigates the effectiveness of uncertainty-based explanations for prostate lesion detection and attempts to characterize what radiologists consider explainable. An experiment was conducted with a cohort of radiologists to determine whether uncertainty-based explanation methods improve prostate lesion detection. Additionally, a qualitative assessment of each method was conducted to identify which characteristics make an explanation method suitable for radiology end use. Uncertainty-based explanation methods were found to increase lesion detection performance by up to 20%, and perceived explanation quality was found to be related to actual explanation quality. This pilot study demonstrates the potential of explanation methods for radiology end use and offers insight into what radiologists deem explainable.
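To make the setting concrete, the sketch below shows one common way to attach uncertainty to a saliency-style explanation: repeat a gradient-based saliency map under Monte Carlo dropout and aggregate the per-pixel mean and standard deviation. This is a minimal illustrative sketch, not the specific methods evaluated in the paper; the model, target class index, and sample count are assumptions.

import torch
import torch.nn as nn

def mc_dropout_saliency(model: nn.Module, image: torch.Tensor,
                        target_class: int, n_samples: int = 20):
    """Return (mean, std) saliency maps over n_samples stochastic passes.

    image: a single input of shape (1, C, H, W). Dropout layers are kept
    active at inference so each pass samples a different thinned network.
    """
    model.eval()
    for m in model.modules():              # re-enable dropout only;
        if isinstance(m, nn.Dropout):      # batch norm stays in eval mode
            m.train()

    maps = []
    for _ in range(n_samples):
        x = image.clone().requires_grad_(True)
        score = model(x)[0, target_class]  # logit for the class of interest
        model.zero_grad()
        score.backward()
        # Vanilla saliency: max |gradient| across input channels -> (1, H, W)
        maps.append(x.grad.detach().abs().max(dim=1).values)

    stack = torch.cat(maps, dim=0)         # (n_samples, H, W)
    return stack.mean(dim=0), stack.std(dim=0)

Under this sketch, the mean map plays the role of the explanation, while a high per-pixel standard deviation flags regions where the explanation itself is unstable; overlaying both maps on the MRI slice is one way such uncertainty-based explanations can be presented to a reader.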

Cite this Paper

BibTeX
@InProceedings{pmlr-v182-trombley22a,
  title     = {Evaluating Uncertainty-Based Deep Learning Explanations for Prostate Lesion Detection},
  author    = {Trombley, Christopher M and Gulum, Mehmet Akif and Ozen, Merve and Esen, Enes and Aksamoglu, Melih and Kantardzic, Mehmed},
  booktitle = {Proceedings of the 7th Machine Learning for Healthcare Conference},
  pages     = {874--891},
  year      = {2022},
  editor    = {Lipton, Zachary and Ranganath, Rajesh and Sendak, Mark and Sjoding, Michael and Yeung, Serena},
  volume    = {182},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--06 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v182/trombley22a/trombley22a.pdf},
  url       = {https://proceedings.mlr.press/v182/trombley22a.html}
}
APA
Trombley, C.M., Gulum, M.A., Ozen, M., Esen, E., Aksamoglu, M. & Kantardzic, M. (2022). Evaluating Uncertainty-Based Deep Learning Explanations for Prostate Lesion Detection. Proceedings of the 7th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research 182:874-891. Available from https://proceedings.mlr.press/v182/trombley22a.html.