Missing Values and Imputation in Healthcare Data: Can Interpretable Machine Learning Help?

Zhi Chen, Sarah Tan, Urszula Chajewska, Cynthia Rudin, Rich Caruana
Proceedings of the Conference on Health, Inference, and Learning, PMLR 209:86-99, 2023.

Abstract

Missing values are a fundamental problem in data science. Many datasets have missing values that must be properly handled because the way missing values are treated can have a large impact on the resulting machine learning model. In medical applications, the consequences may affect healthcare decisions. There are many methods in the literature for dealing with missing values, including state-of-the-art methods that often depend on black-box models for imputation. In this work, we show how recent advances in interpretable machine learning provide a new perspective for understanding and tackling the missing value problem. We propose methods based on high-accuracy glass-box Explainable Boosting Machines (EBMs) that can help users (1) gain new insights into missingness mechanisms and better understand the causes of missingness, and (2) detect – or even alleviate – potential risks introduced by imputation algorithms. Experiments on real-world medical datasets illustrate the effectiveness of the proposed methods.

Cite this Paper


BibTeX
@InProceedings{pmlr-v209-chen23a,
  title = {Missing Values and Imputation in Healthcare Data: Can Interpretable Machine Learning Help?},
  author = {Chen, Zhi and Tan, Sarah and Chajewska, Urszula and Rudin, Cynthia and Caruana, Rich},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
  pages = {86--99},
  year = {2023},
  editor = {Mortazavi, Bobak J. and Sarker, Tasmie and Beam, Andrew and Ho, Joyce C.},
  volume = {209},
  series = {Proceedings of Machine Learning Research},
  month = {22 Jun--24 Jun},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v209/chen23a/chen23a.pdf},
  url = {https://proceedings.mlr.press/v209/chen23a.html},
  abstract = {Missing values are a fundamental problem in data science. Many datasets have missing values that must be properly handled because the way missing values are treated can have a large impact on the resulting machine learning model. In medical applications, the consequences may affect healthcare decisions. There are many methods in the literature for dealing with missing values, including state-of-the-art methods that often depend on black-box models for imputation. In this work, we show how recent advances in interpretable machine learning provide a new perspective for understanding and tackling the missing value problem. We propose methods based on high-accuracy glass-box Explainable Boosting Machines (EBMs) that can help users (1) gain new insights into missingness mechanisms and better understand the causes of missingness, and (2) detect – or even alleviate – potential risks introduced by imputation algorithms. Experiments on real-world medical datasets illustrate the effectiveness of the proposed methods.}
}
Endnote
%0 Conference Paper
%T Missing Values and Imputation in Healthcare Data: Can Interpretable Machine Learning Help?
%A Zhi Chen
%A Sarah Tan
%A Urszula Chajewska
%A Cynthia Rudin
%A Rich Caruana
%B Proceedings of the Conference on Health, Inference, and Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Bobak J. Mortazavi
%E Tasmie Sarker
%E Andrew Beam
%E Joyce C. Ho
%F pmlr-v209-chen23a
%I PMLR
%P 86--99
%U https://proceedings.mlr.press/v209/chen23a.html
%V 209
%X Missing values are a fundamental problem in data science. Many datasets have missing values that must be properly handled because the way missing values are treated can have a large impact on the resulting machine learning model. In medical applications, the consequences may affect healthcare decisions. There are many methods in the literature for dealing with missing values, including state-of-the-art methods that often depend on black-box models for imputation. In this work, we show how recent advances in interpretable machine learning provide a new perspective for understanding and tackling the missing value problem. We propose methods based on high-accuracy glass-box Explainable Boosting Machines (EBMs) that can help users (1) gain new insights into missingness mechanisms and better understand the causes of missingness, and (2) detect – or even alleviate – potential risks introduced by imputation algorithms. Experiments on real-world medical datasets illustrate the effectiveness of the proposed methods.
APA
Chen, Z., Tan, S., Chajewska, U., Rudin, C. & Caruana, R. (2023). Missing Values and Imputation in Healthcare Data: Can Interpretable Machine Learning Help? Proceedings of the Conference on Health, Inference, and Learning, in Proceedings of Machine Learning Research 209:86-99. Available from https://proceedings.mlr.press/v209/chen23a.html.