Retrieving Evidence from EHRs with LLMs: Possibilities and Challenges

Hiba Ahsan, Denis Jered McInerney, Jisoo Kim, Christopher A Potter, Geoffrey Young, Silvio Amir, Byron C Wallace
Proceedings of the fifth Conference on Health, Inference, and Learning, PMLR 248:489-505, 2024.

Abstract

Unstructured data in Electronic Health Records (EHRs) often contains critical information—complementary to imaging—that could inform radiologists’ diagnoses. But the large volume of notes often associated with patients together with time constraints renders manually identifying relevant evidence practically infeasible. In this work we propose and evaluate a zero-shot strategy for using LLMs as a mechanism to efficiently retrieve and summarize unstructured evidence in patient EHR relevant to a given query. Our method entails tasking an LLM to infer whether a patient has, or is at risk of, a particular condition on the basis of associated notes; if so, we ask the model to summarize the supporting evidence. Under expert evaluation, we find that this LLM-based approach provides outputs consistently preferred to a pre-LLM information retrieval baseline. Manual evaluation is expensive, so we also propose and validate a method using an LLM to evaluate (other) LLM outputs for this task, allowing us to scale up evaluation. Our findings indicate the promise of LLMs as interfaces to EHR, but also highlight the outstanding challenge posed by “hallucinations”. In this setting, however, we show that model confidence in outputs strongly correlates with faithful summaries, offering a practical means to limit confabulations.

Cite this Paper


BibTeX
@InProceedings{pmlr-v248-ahsan24a,
  title     = {Retrieving Evidence from EHRs with LLMs: Possibilities and Challenges},
  author    = {Ahsan, Hiba and McInerney, Denis Jered and Kim, Jisoo and Potter, Christopher A and Young, Geoffrey and Amir, Silvio and Wallace, Byron C},
  booktitle = {Proceedings of the fifth Conference on Health, Inference, and Learning},
  pages     = {489--505},
  year      = {2024},
  editor    = {Pollard, Tom and Choi, Edward and Singhal, Pankhuri and Hughes, Michael and Sizikova, Elena and Mortazavi, Bobak and Chen, Irene and Wang, Fei and Sarker, Tasmie and McDermott, Matthew and Ghassemi, Marzyeh},
  volume    = {248},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--28 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v248/main/assets/ahsan24a/ahsan24a.pdf},
  url       = {https://proceedings.mlr.press/v248/ahsan24a.html},
  abstract  = {Unstructured data in Electronic Health Records (EHRs) often contains critical information—complementary to imaging—that could inform radiologists’ diagnoses. But the large volume of notes often associated with patients together with time constraints renders manually identifying relevant evidence practically infeasible. In this work we propose and evaluate a zero-shot strategy for using LLMs as a mechanism to efficiently retrieve and summarize unstructured evidence in patient EHR relevant to a given query. Our method entails tasking an LLM to infer whether a patient has, or is at risk of, a particular condition on the basis of associated notes; if so, we ask the model to summarize the supporting evidence. Under expert evaluation, we find that this LLM-based approach provides outputs consistently preferred to a pre-LLM information retrieval baseline. Manual evaluation is expensive, so we also propose and validate a method using an LLM to evaluate (other) LLM outputs for this task, allowing us to scale up evaluation. Our findings indicate the promise of LLMs as interfaces to EHR, but also highlight the outstanding challenge posed by “hallucinations”. In this setting, however, we show that model confidence in outputs strongly correlates with faithful summaries, offering a practical means to limit confabulations.}
}
Endnote
%0 Conference Paper
%T Retrieving Evidence from EHRs with LLMs: Possibilities and Challenges
%A Hiba Ahsan
%A Denis Jered McInerney
%A Jisoo Kim
%A Christopher A Potter
%A Geoffrey Young
%A Silvio Amir
%A Byron C Wallace
%B Proceedings of the fifth Conference on Health, Inference, and Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Tom Pollard
%E Edward Choi
%E Pankhuri Singhal
%E Michael Hughes
%E Elena Sizikova
%E Bobak Mortazavi
%E Irene Chen
%E Fei Wang
%E Tasmie Sarker
%E Matthew McDermott
%E Marzyeh Ghassemi
%F pmlr-v248-ahsan24a
%I PMLR
%P 489--505
%U https://proceedings.mlr.press/v248/ahsan24a.html
%V 248
%X Unstructured data in Electronic Health Records (EHRs) often contains critical information—complementary to imaging—that could inform radiologists’ diagnoses. But the large volume of notes often associated with patients together with time constraints renders manually identifying relevant evidence practically infeasible. In this work we propose and evaluate a zero-shot strategy for using LLMs as a mechanism to efficiently retrieve and summarize unstructured evidence in patient EHR relevant to a given query. Our method entails tasking an LLM to infer whether a patient has, or is at risk of, a particular condition on the basis of associated notes; if so, we ask the model to summarize the supporting evidence. Under expert evaluation, we find that this LLM-based approach provides outputs consistently preferred to a pre-LLM information retrieval baseline. Manual evaluation is expensive, so we also propose and validate a method using an LLM to evaluate (other) LLM outputs for this task, allowing us to scale up evaluation. Our findings indicate the promise of LLMs as interfaces to EHR, but also highlight the outstanding challenge posed by “hallucinations”. In this setting, however, we show that model confidence in outputs strongly correlates with faithful summaries, offering a practical means to limit confabulations.
APA
Ahsan, H., McInerney, D.J., Kim, J., Potter, C.A., Young, G., Amir, S. & Wallace, B.C. (2024). Retrieving Evidence from EHRs with LLMs: Possibilities and Challenges. Proceedings of the fifth Conference on Health, Inference, and Learning, in Proceedings of Machine Learning Research 248:489-505. Available from https://proceedings.mlr.press/v248/ahsan24a.html.