What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use

Sana Tonekaboni, Shalmali Joshi, Melissa D. McCradden, Anna Goldenberg
Proceedings of the 4th Machine Learning for Healthcare Conference, PMLR 106:359-380, 2019.

Abstract

Translating machine learning (ML) models effectively to clinical practice requires establishing clinicians’ trust. Explainability, or the ability of an ML model to justify its outcomes and assist clinicians in rationalizing the model prediction, is generally understood to be critical to establishing trust. However, the field suffers from a lack of concrete definitions for usable explanations in different settings. To identify specific aspects of explainability that may catalyze building trust in ML models, we surveyed clinicians from two distinct acute care specialties (Intensive Care Unit and Emergency Department). We use their feedback to characterize when explainability helps to improve clinicians’ trust in ML models. We further identify the classes of explanations that clinicians identified as most relevant and crucial for effective translation to clinical practice. Finally, we discern concrete metrics for rigorous evaluation of clinical explainability methods. By integrating clinicians’ and ML researchers’ perceptions of explainability, we hope to facilitate the endorsement, broader adoption, and sustained use of ML systems in healthcare.

Cite this Paper

BibTeX
@InProceedings{pmlr-v106-tonekaboni19a,
  title     = {What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use},
  author    = {Tonekaboni, Sana and Joshi, Shalmali and McCradden, Melissa D. and Goldenberg, Anna},
  booktitle = {Proceedings of the 4th Machine Learning for Healthcare Conference},
  pages     = {359--380},
  year      = {2019},
  editor    = {Doshi-Velez, Finale and Fackler, Jim and Jung, Ken and Kale, David and Ranganath, Rajesh and Wallace, Byron and Wiens, Jenna},
  volume    = {106},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--10 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v106/tonekaboni19a/tonekaboni19a.pdf},
  url       = {https://proceedings.mlr.press/v106/tonekaboni19a.html},
  abstract  = {Translating machine learning (ML) models effectively to clinical practice requires establishing clinicians’ trust. Explainability, or the ability of an ML model to justify its outcomes and assist clinicians in rationalizing the model prediction, is generally understood to be critical to establishing trust. However, the field suffers from a lack of concrete definitions for usable explanations in different settings. To identify specific aspects of explainability that may catalyze building trust in ML models, we surveyed clinicians from two distinct acute care specialties (Intensive Care Unit and Emergency Department). We use their feedback to characterize when explainability helps to improve clinicians’ trust in ML models. We further identify the classes of explanations that clinicians identified as most relevant and crucial for effective translation to clinical practice. Finally, we discern concrete metrics for rigorous evaluation of clinical explainability methods. By integrating clinicians’ and ML researchers’ perceptions of explainability, we hope to facilitate the endorsement, broader adoption, and sustained use of ML systems in healthcare.}
}
Endnote
%0 Conference Paper
%T What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
%A Sana Tonekaboni
%A Shalmali Joshi
%A Melissa D. McCradden
%A Anna Goldenberg
%B Proceedings of the 4th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2019
%E Finale Doshi-Velez
%E Jim Fackler
%E Ken Jung
%E David Kale
%E Rajesh Ranganath
%E Byron Wallace
%E Jenna Wiens
%F pmlr-v106-tonekaboni19a
%I PMLR
%P 359--380
%U https://proceedings.mlr.press/v106/tonekaboni19a.html
%V 106
%X Translating machine learning (ML) models effectively to clinical practice requires establishing clinicians’ trust. Explainability, or the ability of an ML model to justify its outcomes and assist clinicians in rationalizing the model prediction, is generally understood to be critical to establishing trust. However, the field suffers from a lack of concrete definitions for usable explanations in different settings. To identify specific aspects of explainability that may catalyze building trust in ML models, we surveyed clinicians from two distinct acute care specialties (Intensive Care Unit and Emergency Department). We use their feedback to characterize when explainability helps to improve clinicians’ trust in ML models. We further identify the classes of explanations that clinicians identified as most relevant and crucial for effective translation to clinical practice. Finally, we discern concrete metrics for rigorous evaluation of clinical explainability methods. By integrating clinicians’ and ML researchers’ perceptions of explainability, we hope to facilitate the endorsement, broader adoption, and sustained use of ML systems in healthcare.
APA
Tonekaboni, S., Joshi, S., McCradden, M. D., & Goldenberg, A. (2019). What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use. Proceedings of the 4th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research 106:359-380. Available from https://proceedings.mlr.press/v106/tonekaboni19a.html.