CEHR-BERT: Incorporating temporal information from structured EHR data to improve prediction tasks

Chao Pang, Xinzhuo Jiang, Krishna S. Kalluri, Matthew Spotnitz, RuiJun Chen, Adler Perotte, Karthik Natarajan
Proceedings of Machine Learning for Health, PMLR 158:239-260, 2021.

Abstract

Embedding algorithms are increasingly used to represent clinical concepts in healthcare for improving machine learning tasks such as clinical phenotyping and disease prediction. Recent studies have adapted the state-of-the-art bidirectional encoder representations from transformers (BERT) architecture to structured electronic health records (EHR) data for the generation of contextualized concept embeddings, yet they do not fully incorporate temporal data across multiple clinical domains. Therefore, we developed a new BERT adaptation, CEHR-BERT, to incorporate temporal information using a hybrid approach: augmenting the input to BERT with artificial time tokens, combining time, age, and concept embeddings, and introducing a second learning objective for visit type. CEHR-BERT was trained on a subset of clinical data from Columbia University Irving Medical Center-New York Presbyterian Hospital, which includes 2.4M patients spanning over three decades, and was tested using 4-fold evaluation on the following prediction tasks: hospitalization, death, new heart failure (HF) diagnosis, and HF readmission. Our experiments show that CEHR-BERT outperformed existing state-of-the-art clinical BERT adaptations and baseline models across all 4 prediction tasks in both ROC-AUC and PR-AUC. CEHR-BERT also demonstrated strong few-shot learning capability, as our model trained on only 5% of the data outperformed comparison models trained on the entire data set. Ablation studies to better understand the contribution of each time component showed incremental gains with every element, suggesting that CEHR-BERT’s incorporation of artificial time tokens, time/age embeddings with concept embeddings, and the addition of the second learning objective represents a promising approach for future BERT-based clinical embeddings.
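
To make the input augmentation described above more concrete, the sketch below shows one way artificial time tokens could be interleaved between a patient's visits before the sequence is fed to a BERT-style tokenizer. The bucket thresholds, the W*/M*/LT token names, and the [VS]/[VE] visit-boundary markers are illustrative assumptions for this sketch, not the paper's exact vocabulary or implementation.

```python
# Minimal sketch (assumed token scheme, not the authors' code): interleave
# artificial time tokens between visits so elapsed time becomes part of the
# token sequence seen by a BERT-style model.
from typing import List, Tuple

Visit = Tuple[int, List[str]]  # (days since first visit, concept codes in that visit)

def time_token(gap_days: int) -> str:
    """Map an inter-visit gap to a coarse artificial time token (assumed bucketing)."""
    if gap_days < 28:
        return f"W{gap_days // 7}"    # week-level token, e.g. W0..W3
    if gap_days < 360:
        return f"M{gap_days // 30}"   # month-level token, e.g. M1..M11
    return "LT"                       # long-term token for gaps of roughly a year or more

def build_sequence(visits: List[Visit]) -> List[str]:
    """Flatten a patient's visits into one token sequence, wrapping each visit in
    assumed [VS]/[VE] boundary markers and inserting a time token between visits."""
    tokens: List[str] = []
    for i, (day, concepts) in enumerate(visits):
        if i > 0:
            gap = day - visits[i - 1][0]
            tokens.append(time_token(gap))  # encode elapsed time between consecutive visits
        tokens += ["[VS]", *concepts, "[VE]"]
    return tokens

if __name__ == "__main__":
    patient = [(0, ["ICD10:I50.9"]), (10, ["RxNorm:197361"]), (400, ["ICD10:N18.3"])]
    print(build_sequence(patient))
    # ['[VS]', 'ICD10:I50.9', '[VE]', 'W1', '[VS]', 'RxNorm:197361', '[VE]', 'LT', '[VS]', 'ICD10:N18.3', '[VE]']
```

In the full model, each token in such a sequence would additionally be paired with time and age embeddings that are combined with the concept embedding, so temporal signal enters both through the tokens themselves and through the embedding sum.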

Cite this Paper


BibTeX
@InProceedings{pmlr-v158-pang21a,
  title     = {CEHR-BERT: Incorporating temporal information from structured EHR data to improve prediction tasks},
  author    = {Pang, Chao and Jiang, Xinzhuo and Kalluri, Krishna S. and Spotnitz, Matthew and Chen, RuiJun and Perotte, Adler and Natarajan, Karthik},
  booktitle = {Proceedings of Machine Learning for Health},
  pages     = {239--260},
  year      = {2021},
  editor    = {Roy, Subhrajit and Pfohl, Stephen and Rocheteau, Emma and Tadesse, Girmaw Abebe and Oala, Luis and Falck, Fabian and Zhou, Yuyin and Shen, Liyue and Zamzmi, Ghada and Mugambi, Purity and Zirikly, Ayah and McDermott, Matthew B. A. and Alsentzer, Emily},
  volume    = {158},
  series    = {Proceedings of Machine Learning Research},
  month     = {04 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v158/pang21a/pang21a.pdf},
  url       = {https://proceedings.mlr.press/v158/pang21a.html}
}
Endnote
%0 Conference Paper
%T CEHR-BERT: Incorporating temporal information from structured EHR data to improve prediction tasks
%A Chao Pang
%A Xinzhuo Jiang
%A Krishna S. Kalluri
%A Matthew Spotnitz
%A RuiJun Chen
%A Adler Perotte
%A Karthik Natarajan
%B Proceedings of Machine Learning for Health
%C Proceedings of Machine Learning Research
%D 2021
%E Subhrajit Roy
%E Stephen Pfohl
%E Emma Rocheteau
%E Girmaw Abebe Tadesse
%E Luis Oala
%E Fabian Falck
%E Yuyin Zhou
%E Liyue Shen
%E Ghada Zamzmi
%E Purity Mugambi
%E Ayah Zirikly
%E Matthew B. A. McDermott
%E Emily Alsentzer
%F pmlr-v158-pang21a
%I PMLR
%P 239--260
%U https://proceedings.mlr.press/v158/pang21a.html
%V 158
APA
Pang, C., Jiang, X., Kalluri, K. S., Spotnitz, M., Chen, R., Perotte, A., & Natarajan, K. (2021). CEHR-BERT: Incorporating temporal information from structured EHR data to improve prediction tasks. Proceedings of Machine Learning for Health, in Proceedings of Machine Learning Research 158:239-260. Available from https://proceedings.mlr.press/v158/pang21a.html.
