RadFlag: A Black-Box Hallucination Detection Method for Medical Vision Language Models

Serena Zhang, Sraavya Sambara, Oishi Banerjee, Julian N. Acosta, L. John Fahrner, Pranav Rajpurkar
Proceedings of the 4th Machine Learning for Health Symposium, PMLR 259:1087-1103, 2025.

Abstract

Generating accurate radiology reports from medical images is a clinically important but challenging task. While current vision language models show promise, they are prone to generating hallucinations, potentially compromising patient care. We introduce RadFlag, a black-box method to enhance the accuracy of radiology report generation. Our method uses a sampling-based flagging technique to find hallucinatory generations that should be removed. We first sample multiple reports at varying temperatures and then use a large language model to identify claims that are not consistently supported across samples, indicating that the model has low confidence in those claims. Using a calibrated threshold, we flag a fraction of these claims as likely hallucinations, which should undergo extra review or be automatically rejected. Our method achieves high precision when identifying both individual hallucinatory sentences and reports that contain hallucinations. As an easy-to-use, black-box system that only requires access to a model’s temperature parameter, RadFlag is compatible with a wide range of radiology report generation models and has the potential to broadly improve the quality of automated radiology reporting.
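As a rough illustration of the sampling-and-flagging pipeline the abstract describes, here is a minimal Python sketch. Every name in it (generate_report, split_into_claims, claim_supported), the temperature values, and the threshold are hypothetical placeholders standing in for the paper's actual prompts, models, and calibrated threshold; this is not the authors' implementation.

```python
# Minimal sketch of the sampling-based flagging pipeline described in the
# abstract. All names below (generate_report, split_into_claims,
# claim_supported) are hypothetical placeholders, not the authors' API.

from typing import Callable, List, Sequence, Tuple

def flag_low_confidence_claims(
    image,
    generate_report: Callable[..., str],          # wraps the report-generation VLM
    split_into_claims: Callable[[str], List[str]],
    claim_supported: Callable[[str, str], bool],  # LLM judge: is claim entailed by a sample?
    temperatures: Sequence[float] = (0.2, 0.5, 0.8, 1.1),  # illustrative values only
    threshold: float = 0.5,                       # stands in for the calibrated threshold
) -> List[Tuple[str, float]]:
    """Flag claims in a generated report that sampled reports fail to support consistently."""
    base_report = generate_report(image, temperature=0.0)
    samples = [generate_report(image, temperature=t) for t in temperatures]

    flagged = []
    for claim in split_into_claims(base_report):
        # Fraction of sampled reports in which the LLM judges the claim supported.
        support = sum(claim_supported(claim, s) for s in samples) / len(samples)
        if support < threshold:
            flagged.append((claim, support))  # likely hallucination: review or reject
    return flagged
```

Claims returned by such a routine would then either undergo extra review or be automatically rejected, matching the two uses of flags the abstract proposes.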

Cite this Paper

BibTeX
@InProceedings{pmlr-v259-zhang25c,
  title     = {RadFlag: A Black-Box Hallucination Detection Method for Medical Vision Language Models},
  author    = {Zhang, Serena and Sambara, Sraavya and Banerjee, Oishi and Acosta, Julian N. and Fahrner, L. John and Rajpurkar, Pranav},
  booktitle = {Proceedings of the 4th Machine Learning for Health Symposium},
  pages     = {1087--1103},
  year      = {2025},
  editor    = {Hegselmann, Stefan and Zhou, Helen and Healey, Elizabeth and Chang, Trenton and Ellington, Caleb and Mhasawade, Vishwali and Tonekaboni, Sana and Argaw, Peniel and Zhang, Haoran},
  volume    = {259},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v259/main/assets/zhang25c/zhang25c.pdf},
  url       = {https://proceedings.mlr.press/v259/zhang25c.html}
}
Endnote
%0 Conference Paper
%T RadFlag: A Black-Box Hallucination Detection Method for Medical Vision Language Models
%A Serena Zhang
%A Sraavya Sambara
%A Oishi Banerjee
%A Julian N. Acosta
%A L. John Fahrner
%A Pranav Rajpurkar
%B Proceedings of the 4th Machine Learning for Health Symposium
%C Proceedings of Machine Learning Research
%D 2025
%E Stefan Hegselmann
%E Helen Zhou
%E Elizabeth Healey
%E Trenton Chang
%E Caleb Ellington
%E Vishwali Mhasawade
%E Sana Tonekaboni
%E Peniel Argaw
%E Haoran Zhang
%F pmlr-v259-zhang25c
%I PMLR
%P 1087--1103
%U https://proceedings.mlr.press/v259/zhang25c.html
%V 259
APA
Zhang, S., Sambara, S., Banerjee, O., Acosta, J.N., Fahrner, L.J., & Rajpurkar, P. (2025). RadFlag: A Black-Box Hallucination Detection Method for Medical Vision Language Models. Proceedings of the 4th Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 259:1087-1103. Available from https://proceedings.mlr.press/v259/zhang25c.html.