Development and Validation of ML-DQA – a Machine Learning Data Quality Assurance Framework for Healthcare

Mark Sendak, Gaurav Sirdeshmukh, Timothy Ochoa, Hayley Premo, Linda Tang, Kira Niederhoffer, Sarah Reed, Kaivalya Deshpande, Emily Sterrett, Melissa Bauer, Laurie Snyder, Afreen Shariff, David Whellan, Jeffrey Riggio, David Gaieski, Kristin Corey, Megan Richards, Michael Gao, Marshall Nichols, Bradley Heintze, William Knechtle, William Ratliff, Suresh Balu
Proceedings of the 7th Machine Learning for Healthcare Conference, PMLR 182:741-759, 2022.

Abstract

The approaches by which the machine learning and clinical research communities utilize real world data (RWD), including data captured in the electronic health record (EHR), vary dramatically. While clinical researchers cautiously use RWD for clinical investigations, ML for healthcare teams consume public datasets with minimal scrutiny to develop new algorithms. This study bridges this gap by developing and validating ML-DQA, a data quality assurance framework grounded in RWD best practices. The ML-DQA framework is applied to five ML projects across two geographies, different medical conditions, and different cohorts. A total of 2,999 quality checks and 24 quality reports were generated on RWD gathered on 247,536 patients across the five projects. Five generalizable practices emerge: all projects used a similar method to group redundant data element representations; all projects used automated utilities to build diagnosis and medication data elements; all projects used a common library of rules-based transformations; all projects used a unified approach to assign data quality checks to data elements; and all projects used a similar approach to clinical adjudication. An average of 5.8 individuals, including clinicians, data scientists, and trainees, were involved in implementing ML-DQA for each project and an average of 23.4 data elements per project were either transformed or removed in response to ML-DQA. This study demonstrates the important role of ML-DQA in healthcare projects and provides teams with a framework to conduct these essential activities.
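The abstract's "unified approach to assign data quality checks to data elements" can be pictured with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: the element names, plausibility ranges, and check functions below are assumptions chosen only to show how rules-based checks might be assigned to data elements and run to produce a quality report.

```python
# Hypothetical sketch (not the ML-DQA codebase): assigning rules-based
# data quality checks to data elements, in the spirit of the framework.

def range_check(lo, hi):
    # Flags values outside the plausible clinical range [lo, hi].
    return lambda vals: [v for v in vals if v is not None and not (lo <= v <= hi)]

def missingness_check(vals):
    # Flags missing (None) entries.
    return [v for v in vals if v is None]

# Each data element is mapped to its assigned checks; the element name
# and range here are illustrative assumptions.
checks = {
    "heart_rate": [("range", range_check(20, 250)),
                   ("missing", missingness_check)],
}

data = {"heart_rate": [72, None, 300, 88]}

# Run every assigned check against its element and collect the failures.
report = {
    elem: {name: fn(values) for name, fn in checks[elem]}
    for elem, values in data.items()
}
print(report)
# {'heart_rate': {'range': [300], 'missing': [None]}}
```

Flagged values would then feed the kind of quality reports and clinical adjudication the abstract describes, with elements transformed or removed in response.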

Cite this Paper


BibTeX
@InProceedings{pmlr-v182-sendak22a,
  title     = {Development and Validation of ML-DQA – a Machine Learning Data Quality Assurance Framework for Healthcare},
  author    = {Sendak, Mark and Sirdeshmukh, Gaurav and Ochoa, Timothy and Premo, Hayley and Tang, Linda and Niederhoffer, Kira and Reed, Sarah and Deshpande, Kaivalya and Sterrett, Emily and Bauer, Melissa and Snyder, Laurie and Shariff, Afreen and Whellan, David and Riggio, Jeffrey and Gaieski, David and Corey, Kristin and Richards, Megan and Gao, Michael and Nichols, Marshall and Heintze, Bradley and Knechtle, William and Ratliff, William and Balu, Suresh},
  booktitle = {Proceedings of the 7th Machine Learning for Healthcare Conference},
  pages     = {741--759},
  year      = {2022},
  editor    = {Lipton, Zachary and Ranganath, Rajesh and Sendak, Mark and Sjoding, Michael and Yeung, Serena},
  volume    = {182},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--06 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v182/sendak22a/sendak22a.pdf},
  url       = {https://proceedings.mlr.press/v182/sendak22a.html},
  abstract  = {The approaches by which the machine learning and clinical research communities utilize real world data (RWD), including data captured in the electronic health record (EHR), vary dramatically. While clinical researchers cautiously use RWD for clinical investigations, ML for healthcare teams consume public datasets with minimal scrutiny to develop new algorithms. This study bridges this gap by developing and validating ML-DQA, a data quality assurance framework grounded in RWD best practices. The ML-DQA framework is applied to five ML projects across two geographies, different medical conditions, and different cohorts. A total of 2,999 quality checks and 24 quality reports were generated on RWD gathered on 247,536 patients across the five projects. Five generalizable practices emerge: all projects used a similar method to group redundant data element representations; all projects used automated utilities to build diagnosis and medication data elements; all projects used a common library of rules-based transformations; all projects used a unified approach to assign data quality checks to data elements; and all projects used a similar approach to clinical adjudication. An average of 5.8 individuals, including clinicians, data scientists, and trainees, were involved in implementing ML-DQA for each project and an average of 23.4 data elements per project were either transformed or removed in response to ML-DQA. This study demonstrates the important role of ML-DQA in healthcare projects and provides teams with a framework to conduct these essential activities.}
}
Endnote
%0 Conference Paper
%T Development and Validation of ML-DQA – a Machine Learning Data Quality Assurance Framework for Healthcare
%A Mark Sendak
%A Gaurav Sirdeshmukh
%A Timothy Ochoa
%A Hayley Premo
%A Linda Tang
%A Kira Niederhoffer
%A Sarah Reed
%A Kaivalya Deshpande
%A Emily Sterrett
%A Melissa Bauer
%A Laurie Snyder
%A Afreen Shariff
%A David Whellan
%A Jeffrey Riggio
%A David Gaieski
%A Kristin Corey
%A Megan Richards
%A Michael Gao
%A Marshall Nichols
%A Bradley Heintze
%A William Knechtle
%A William Ratliff
%A Suresh Balu
%B Proceedings of the 7th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Zachary Lipton
%E Rajesh Ranganath
%E Mark Sendak
%E Michael Sjoding
%E Serena Yeung
%F pmlr-v182-sendak22a
%I PMLR
%P 741--759
%U https://proceedings.mlr.press/v182/sendak22a.html
%V 182
%X The approaches by which the machine learning and clinical research communities utilize real world data (RWD), including data captured in the electronic health record (EHR), vary dramatically. While clinical researchers cautiously use RWD for clinical investigations, ML for healthcare teams consume public datasets with minimal scrutiny to develop new algorithms. This study bridges this gap by developing and validating ML-DQA, a data quality assurance framework grounded in RWD best practices. The ML-DQA framework is applied to five ML projects across two geographies, different medical conditions, and different cohorts. A total of 2,999 quality checks and 24 quality reports were generated on RWD gathered on 247,536 patients across the five projects. Five generalizable practices emerge: all projects used a similar method to group redundant data element representations; all projects used automated utilities to build diagnosis and medication data elements; all projects used a common library of rules-based transformations; all projects used a unified approach to assign data quality checks to data elements; and all projects used a similar approach to clinical adjudication. An average of 5.8 individuals, including clinicians, data scientists, and trainees, were involved in implementing ML-DQA for each project and an average of 23.4 data elements per project were either transformed or removed in response to ML-DQA. This study demonstrates the important role of ML-DQA in healthcare projects and provides teams with a framework to conduct these essential activities.
APA
Sendak, M., Sirdeshmukh, G., Ochoa, T., Premo, H., Tang, L., Niederhoffer, K., Reed, S., Deshpande, K., Sterrett, E., Bauer, M., Snyder, L., Shariff, A., Whellan, D., Riggio, J., Gaieski, D., Corey, K., Richards, M., Gao, M., Nichols, M., Heintze, B., Knechtle, W., Ratliff, W., & Balu, S. (2022). Development and Validation of ML-DQA – a Machine Learning Data Quality Assurance Framework for Healthcare. Proceedings of the 7th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research 182:741-759. Available from https://proceedings.mlr.press/v182/sendak22a.html.