Robust Benchmarking for Machine Learning of Clinical Entity Extraction

Monica Agrawal, Chloe O’Connell, Yasmin Fatemi, Ariel Levy, David Sontag
Proceedings of the 5th Machine Learning for Healthcare Conference, PMLR 126:928-949, 2020.

Abstract

Clinical studies often require understanding elements of a patient’s narrative that exist only in free text clinical notes. To transform notes into structured data for downstream use, these elements are commonly extracted and normalized to medical vocabularies. In this work, we audit the performance of and indicate areas of improvement for state-of-the-art systems. We find that high task accuracies for clinical entity normalization systems on the 2019 n2c2 Shared Task are misleading, and underlying performance is still brittle. Normalization accuracy is high for common concepts (95.3%), but much lower for concepts unseen in training data (69.3%). We demonstrate that current approaches are hindered in part by inconsistencies in medical vocabularies, limitations of existing labeling schemas, and narrow evaluation techniques. We reformulate the annotation framework for clinical entity extraction to factor in these issues to allow for robust end-to-end system benchmarking. We evaluate concordance of annotations from our new framework between two annotators and achieve a Jaccard similarity of 0.73 for entity recognition and an agreement of 0.83 for entity normalization. We propose a path forward to address the demonstrated need for the creation of a reference standard to spur method development in entity recognition and normalization.
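The abstract reports inter-annotator concordance as a Jaccard similarity over recognized entities and an agreement rate over normalized concepts. The following is a minimal illustrative sketch (not taken from the paper) of how such measures could be computed, assuming each annotator produces character-offset entity spans and a concept code (e.g., a UMLS CUI) per span; the paper's exact matching criteria may differ.

```python
# Illustrative sketch of two inter-annotator measures of the kind reported above.
# Assumption: each annotator yields {(start, end) span -> concept code}.

def jaccard_similarity(spans_a, spans_b):
    """Jaccard similarity between two annotators' sets of entity spans."""
    union = spans_a | spans_b
    if not union:
        return 1.0
    return len(spans_a & spans_b) / len(union)

def normalization_agreement(concepts_a, concepts_b):
    """Fraction of jointly recognized entities mapped to the same concept code."""
    shared = concepts_a.keys() & concepts_b.keys()
    if not shared:
        return 0.0
    return sum(concepts_a[s] == concepts_b[s] for s in shared) / len(shared)

# Hypothetical example: two annotators labeling the same note.
ann1 = {(10, 22): "C0020538", (40, 52): "C0011849"}
ann2 = {(10, 22): "C0020538", (60, 70): "C0004096"}
print(jaccard_similarity(set(ann1), set(ann2)))   # 0.33 (1 shared of 3 total spans)
print(normalization_agreement(ann1, ann2))        # 1.0 (the shared span gets the same code)
```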

Cite this Paper


BibTeX
@InProceedings{pmlr-v126-agrawal20a,
  title     = {Robust Benchmarking for Machine Learning of Clinical Entity Extraction},
  author    = {Agrawal, Monica and O'Connell, Chloe and Fatemi, Yasmin and Levy, Ariel and Sontag, David},
  booktitle = {Proceedings of the 5th Machine Learning for Healthcare Conference},
  pages     = {928--949},
  year      = {2020},
  editor    = {Doshi-Velez, Finale and Fackler, Jim and Jung, Ken and Kale, David and Ranganath, Rajesh and Wallace, Byron and Wiens, Jenna},
  volume    = {126},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v126/agrawal20a/agrawal20a.pdf},
  url       = {https://proceedings.mlr.press/v126/agrawal20a.html}
}
Endnote
%0 Conference Paper
%T Robust Benchmarking for Machine Learning of Clinical Entity Extraction
%A Monica Agrawal
%A Chloe O’Connell
%A Yasmin Fatemi
%A Ariel Levy
%A David Sontag
%B Proceedings of the 5th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Finale Doshi-Velez
%E Jim Fackler
%E Ken Jung
%E David Kale
%E Rajesh Ranganath
%E Byron Wallace
%E Jenna Wiens
%F pmlr-v126-agrawal20a
%I PMLR
%P 928--949
%U https://proceedings.mlr.press/v126/agrawal20a.html
%V 126
APA
Agrawal, M., O’Connell, C., Fatemi, Y., Levy, A. & Sontag, D. (2020). Robust Benchmarking for Machine Learning of Clinical Entity Extraction. Proceedings of the 5th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research 126:928-949. Available from https://proceedings.mlr.press/v126/agrawal20a.html.