CORE-BEHRT: A Carefully Optimized and Rigorously Evaluated BEHRT

Mikkel Fruelund Odgaard, Kiril Vadimovic Klein, Martin Sillesen, Sanne Møller Thysen, Espen Jimenez-Solem, Mads Nielsen
Proceedings of the 9th Machine Learning for Healthcare Conference, PMLR 252, 2024.

Abstract

The widespread adoption of Electronic Health Records (EHR) has significantly increased the amount of available healthcare data. This has allowed models inspired by Natural Language Processing (NLP) and Computer Vision, which scale exceptionally well, to be applied to EHR research. In particular, BERT-based models have surged in popularity following the release of BEHRT and Med-BERT, and subsequent models have largely built on these foundations even though the fundamental design choices of the pioneering models remain underexplored. Through incremental optimization, we study BERT-based EHR modeling and isolate the sources of improvement for key design choices, giving us insights into the effects of data representation, individual technical components, and the training procedure. Evaluating on a set of generic tasks (death, pain treatment, and general infection), we show that improving the data representation raises average downstream performance from 0.785 to 0.797 AUROC ($p < 10^{-7}$), primarily by including medication codes and timestamps. Improving the architecture and training protocol on top of this raises average downstream performance to 0.801 AUROC ($p < 10^{-7}$). We then demonstrate the consistency of our optimization through a rigorous evaluation across 25 diverse clinical prediction tasks, observing statistically significant performance increases in 17 of the 25 tasks and improvements in 24, highlighting the generalizability of our results. Our findings provide a strong foundation for future work and aim to increase the trustworthiness of BERT-based EHR models.
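To make the data-representation finding concrete, the sketch below shows one way a BEHRT-style input could be assembled from raw EHR events: diagnoses and medications are flattened into a single token sequence, and the event timestamps yield a parallel age stream. This is a minimal, hypothetical Python illustration; the Event schema, visit-grouping rule, and special tokens are assumptions for exposition, not the authors' released pipeline.

from dataclasses import dataclass
from datetime import date

# Hypothetical event schema (an assumption for illustration,
# not the paper's actual data format).
@dataclass
class Event:
    code: str        # e.g. an ICD-10 diagnosis or ATC medication code
    timestamp: date  # when the event was recorded

def build_sequence(events: list, birthdate: date):
    """Flatten a patient's events into a BEHRT-style token sequence
    plus a parallel age stream; [SEP] marks visit boundaries (here
    approximated by calendar-date changes)."""
    events = sorted(events, key=lambda e: e.timestamp)
    tokens, ages = ["[CLS]"], [0]  # age 0 as a placeholder for [CLS]
    prev_day = None
    for e in events:
        if prev_day is not None and e.timestamp != prev_day:
            tokens.append("[SEP]")   # new visit
            ages.append(ages[-1])    # carry the previous age forward
        tokens.append(e.code)
        ages.append((e.timestamp - birthdate).days // 365)
        prev_day = e.timestamp
    return tokens, ages

# Mixing diagnoses (ICD-10) with medications (ATC) and timestamps is
# exactly the enrichment the abstract credits with most of the gain.
demo = [
    Event("DJ18", date(2015, 3, 1)),     # pneumonia diagnosis
    Event("N02BE01", date(2015, 3, 1)),  # paracetamol prescription
    Event("DI21", date(2018, 7, 9)),     # myocardial infarction
]
print(build_sequence(demo, birthdate=date(1950, 1, 1)))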
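Similarly, the reported AUROC gains (0.785 to 0.797 to 0.801) come with significance tests. The abstract does not name the test used, so the following paired-bootstrap sketch is only one plausible way to compare two models' AUROC on the same patients; the function and variable names are illustrative.

import numpy as np
from sklearn.metrics import roc_auc_score

def paired_bootstrap_delta_auroc(y, p_base, p_new, n_boot=10_000, seed=0):
    """Resample patients with replacement and measure how often the
    optimized model's AUROC fails to exceed the baseline's (a one-sided
    bootstrap p-value). A hypothetical analysis, not the paper's test."""
    rng = np.random.default_rng(seed)
    n = len(y)
    deltas = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:
            continue  # AUROC is undefined without both classes
        deltas.append(roc_auc_score(y[idx], p_new[idx])
                      - roc_auc_score(y[idx], p_base[idx]))
    deltas = np.asarray(deltas)
    return deltas.mean(), (deltas <= 0).mean()  # mean gain, one-sided p

# Synthetic demo (illustration only): the "new" model separates the
# classes slightly better, so the mean delta is positive and p is small.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
p_base = y * 0.20 + rng.normal(0.40, 0.25, 500)
p_new = y * 0.30 + rng.normal(0.35, 0.25, 500)
print(paired_bootstrap_delta_auroc(y, p_base, p_new, n_boot=2000))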

Cite this Paper

BibTeX
@InProceedings{pmlr-v252-odgaard24a,
  title     = {{CORE}-{BEHRT}: A Carefully Optimized and Rigorously Evaluated {BEHRT}},
  author    = {Odgaard, Mikkel Fruelund and Klein, Kiril Vadimovic and Sillesen, Martin and Thysen, Sanne M{\o}ller and Jimenez-Solem, Espen and Nielsen, Mads},
  booktitle = {Proceedings of the 9th Machine Learning for Healthcare Conference},
  year      = {2024},
  editor    = {Deshpande, Kaivalya and Fiterau, Madalina and Joshi, Shalmali and Lipton, Zachary and Ranganath, Rajesh and Urteaga, Iñigo},
  volume    = {252},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--17 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v252/main/assets/odgaard24a/odgaard24a.pdf},
  url       = {https://proceedings.mlr.press/v252/odgaard24a.html}
}
Endnote
%0 Conference Paper
%T CORE-BEHRT: A Carefully Optimized and Rigorously Evaluated BEHRT
%A Mikkel Fruelund Odgaard
%A Kiril Vadimovic Klein
%A Martin Sillesen
%A Sanne Møller Thysen
%A Espen Jimenez-Solem
%A Mads Nielsen
%B Proceedings of the 9th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Kaivalya Deshpande
%E Madalina Fiterau
%E Shalmali Joshi
%E Zachary Lipton
%E Rajesh Ranganath
%E Iñigo Urteaga
%F pmlr-v252-odgaard24a
%I PMLR
%U https://proceedings.mlr.press/v252/odgaard24a.html
%V 252
APA
Odgaard, M.F., Klein, K.V., Sillesen, M., Thysen, S.M., Jimenez-Solem, E., & Nielsen, M. (2024). CORE-BEHRT: A Carefully Optimized and Rigorously Evaluated BEHRT. Proceedings of the 9th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research, 252. Available from https://proceedings.mlr.press/v252/odgaard24a.html.
