Do We Still Need Clinical Language Models?

Eric Lehman, Evan Hernandez, Diwakar Mahajan, Jonas Wulff, Micah J Smith, Zachary Ziegler, Daniel Nadler, Peter Szolovits, Alistair Johnson, Emily Alsentzer
Proceedings of the Conference on Health, Inference, and Learning, PMLR 209:578-597, 2023.

Abstract

Although recent advances in scaling large language models (LLMs) have resulted in improvements on many NLP tasks, it remains unclear whether these models, trained primarily on general web text, are the right tool in highly specialized, safety-critical domains such as clinical text. Recent results have suggested that LLMs encode a surprising amount of medical knowledge. This raises an important question regarding the utility of smaller domain-specific language models: with the success of general-domain LLMs, is there still a need for specialized clinical models? To investigate this question, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records. As part of our experiments, we train T5-Base and T5-Large models from scratch on clinical notes from MIMIC-III and MIMIC-IV to directly investigate the efficiency of clinical tokens. We show that relatively small specialized clinical models substantially outperform all in-context learning approaches, even when finetuned on limited annotated data. Further, we find that pretraining on clinical tokens allows for smaller, more parameter-efficient models that either match or outperform much larger language models trained on general text. We release the code and the models used under the PhysioNet Credentialed Health Data license and data use agreement (https://github.com/elehman16/clinical_llm).
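The study's core comparison is between in-context learning with large general-domain models and finetuning much smaller specialized models on limited annotated data. As a rough illustration of the finetuning side only, the sketch below runs a single gradient step of text-to-text finetuning on a T5-Base checkpoint with the Hugging Face transformers library. The checkpoint name, the toy note/label pair, and the task framing are illustrative assumptions, not the authors' released models or tasks; real MIMIC notes require credentialed PhysioNet access.

    # Hedged sketch (not the authors' code): one finetuning step for a
    # T5-Base model on a hypothetical clinical classification example,
    # framed text-to-text as T5 expects.
    import torch
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Hypothetical annotated pair standing in for a clinical task example.
    note = "classify: Patient admitted with chest pain; troponin elevated."
    label = "cardiac"

    inputs = tokenizer(note, return_tensors="pt", truncation=True, max_length=512)
    targets = tokenizer(label, return_tensors="pt")

    model.train()
    optimizer.zero_grad()
    loss = model(input_ids=inputs.input_ids,
                 attention_mask=inputs.attention_mask,
                 labels=targets.input_ids).loss  # cross-entropy over label tokens
    loss.backward()
    optimizer.step()

In practice one would loop this over a labeled dataset with batching, padding-token masking, and a learning-rate schedule; the point here is only that the finetuned-small-model regime the abstract describes is an ordinary supervised seq2seq setup.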

Cite this Paper


BibTeX
@InProceedings{pmlr-v209-eric23a,
  title     = {Do We Still Need Clinical Language Models?},
  author    = {Lehman, Eric and Hernandez, Evan and Mahajan, Diwakar and Wulff, Jonas and Smith, Micah J and Ziegler, Zachary and Nadler, Daniel and Szolovits, Peter and Johnson, Alistair and Alsentzer, Emily},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
  pages     = {578--597},
  year      = {2023},
  editor    = {Mortazavi, Bobak J. and Sarker, Tasmie and Beam, Andrew and Ho, Joyce C.},
  volume    = {209},
  series    = {Proceedings of Machine Learning Research},
  month     = {22 Jun--24 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v209/eric23a/eric23a.pdf},
  url       = {https://proceedings.mlr.press/v209/eric23a.html},
  abstract  = {Although recent advances in scaling large language models (LLMs) have resulted in improvements on many NLP tasks, it remains unclear whether these models trained primarily with general web text are the right tool in highly specialized, safety critical domains such as \emph{clinical text}. Recent results have suggested that LLMs encode a surprising amount of medical knowledge. This raises an important question regarding the utility of smaller domain-specific language models. With the success of general-domain LLMs, is there still a need for specialized clinical models? To investigate this question, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records. As part of our experiments, we train T5-Base and T5-Large models from scratch on clinical notes from MIMIC III and IV to directly investigate the efficiency of clinical tokens. We show that relatively small specialized clinical models substantially outperform all in-context learning approaches, even when finetuned on limited annotated data. Further, we find that pretraining on clinical tokens allows for smaller, more parameter-efficient models that either match or outperform much larger language models trained on general text. We release the code and the models used under the PhysioNet Credentialed Health Data license and data use agreement.\footnote{\href{https://github.com/elehman16/clinical_llm}{https://github.com/elehman16/clinical_llm}}}
}
Endnote
%0 Conference Paper
%T Do We Still Need Clinical Language Models?
%A Eric Lehman
%A Evan Hernandez
%A Diwakar Mahajan
%A Jonas Wulff
%A Micah J Smith
%A Zachary Ziegler
%A Daniel Nadler
%A Peter Szolovits
%A Alistair Johnson
%A Emily Alsentzer
%B Proceedings of the Conference on Health, Inference, and Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Bobak J. Mortazavi
%E Tasmie Sarker
%E Andrew Beam
%E Joyce C. Ho
%F pmlr-v209-eric23a
%I PMLR
%P 578--597
%U https://proceedings.mlr.press/v209/eric23a.html
%V 209
%X Although recent advances in scaling large language models (LLMs) have resulted in improvements on many NLP tasks, it remains unclear whether these models trained primarily with general web text are the right tool in highly specialized, safety critical domains such as \emph{clinical text}. Recent results have suggested that LLMs encode a surprising amount of medical knowledge. This raises an important question regarding the utility of smaller domain-specific language models. With the success of general-domain LLMs, is there still a need for specialized clinical models? To investigate this question, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records. As part of our experiments, we train T5-Base and T5-Large models from scratch on clinical notes from MIMIC III and IV to directly investigate the efficiency of clinical tokens. We show that relatively small specialized clinical models substantially outperform all in-context learning approaches, even when finetuned on limited annotated data. Further, we find that pretraining on clinical tokens allows for smaller, more parameter-efficient models that either match or outperform much larger language models trained on general text. We release the code and the models used under the PhysioNet Credentialed Health Data license and data use agreement.\footnote{\href{https://github.com/elehman16/clinical_llm}{https://github.com/elehman16/clinical_llm}}
APA
Lehman, E., Hernandez, E., Mahajan, D., Wulff, J., Smith, M. J., Ziegler, Z., Nadler, D., Szolovits, P., Johnson, A. & Alsentzer, E. (2023). Do We Still Need Clinical Language Models? Proceedings of the Conference on Health, Inference, and Learning, in Proceedings of Machine Learning Research 209:578-597. Available from https://proceedings.mlr.press/v209/eric23a.html.
