Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation

Oishi Banerjee, Hong-Yu Zhou, Kay Wu, Subathra Adithan, Stephen Kwak, Pranav Rajpurkar
Proceedings of the 9th Machine Learning for Healthcare Conference, PMLR 252, 2024.

Abstract

Recent advances in generative vision-language models (VLMs) have exciting potential implications for AI in radiology, yet VLMs are also known to produce hallucinations, nonsensical text, and other unwanted behaviors that can waste clinicians’ time and cause patient harm. Drawing on recent work on direct preference optimization (DPO), we propose a simple method for modifying the behavior of pretrained VLMs performing radiology report generation by suppressing unwanted types of generations. We apply our method to the prevention of hallucinations of prior exams, addressing a long-established problem behavior in models performing chest X-ray report generation. Across our experiments, we find that DPO fine-tuning achieves a 3.2–4.8x reduction in lines hallucinating prior exams while maintaining model performance on clinical accuracy metrics. Ours is, to the best of our knowledge, the first work to apply DPO to medical VLMs, providing a data- and compute-efficient way to suppress problem behaviors while maintaining overall clinical accuracy.
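
For readers unfamiliar with DPO, the sketch below shows the standard DPO objective (Rafailov et al., 2023) as it might be applied to pairs of preferred and dispreferred radiology reports. It is an illustrative assumption written in PyTorch, not the authors' implementation: the function name, tensor shapes, beta value, and toy log-probabilities are all made up for exposition.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective over paired generations.

    Each tensor holds the summed token log-probability of a full report
    under either the trainable policy or the frozen reference model.
    'chosen' = preferred report (e.g., no reference to a prior exam),
    'rejected' = dispreferred report (e.g., hallucinates a prior exam).
    """
    # How far the policy has moved from the reference on each report.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps

    # Reward the policy for widening the margin between preferred and
    # dispreferred reports, scaled by beta.
    margin = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(margin).mean()

# Toy batch of two report pairs with made-up log-probabilities.
loss = dpo_loss(
    policy_chosen_logps=torch.tensor([-45.0, -52.3]),
    policy_rejected_logps=torch.tensor([-44.1, -50.7]),
    ref_chosen_logps=torch.tensor([-46.2, -53.0]),
    ref_rejected_logps=torch.tensor([-43.9, -50.2]),
)
print(loss.item())

In the paper's setting, the preference pairs presumably contrast a report without references to prior exams against one that hallucinates them; the details of pair construction are in the paper itself.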

Cite this Paper


BibTeX
@InProceedings{pmlr-v252-banerjee24a,
  title     = {Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation},
  author    = {Banerjee, Oishi and Zhou, Hong-Yu and Wu, Kay and Adithan, Subathra and Kwak, Stephen and Rajpurkar, Pranav},
  booktitle = {Proceedings of the 9th Machine Learning for Healthcare Conference},
  year      = {2024},
  editor    = {Deshpande, Kaivalya and Fiterau, Madalina and Joshi, Shalmali and Lipton, Zachary and Ranganath, Rajesh and Urteaga, Iñigo},
  volume    = {252},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--17 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v252/main/assets/banerjee24a/banerjee24a.pdf},
  url       = {https://proceedings.mlr.press/v252/banerjee24a.html}
}
Endnote
%0 Conference Paper
%T Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation
%A Oishi Banerjee
%A Hong-Yu Zhou
%A Kay Wu
%A Subathra Adithan
%A Stephen Kwak
%A Pranav Rajpurkar
%B Proceedings of the 9th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Kaivalya Deshpande
%E Madalina Fiterau
%E Shalmali Joshi
%E Zachary Lipton
%E Rajesh Ranganath
%E Iñigo Urteaga
%F pmlr-v252-banerjee24a
%I PMLR
%U https://proceedings.mlr.press/v252/banerjee24a.html
%V 252
APA
Banerjee, O., Zhou, H., Wu, K., Adithan, S., Kwak, S. & Rajpurkar, P. (2024). Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation. Proceedings of the 9th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research 252. Available from https://proceedings.mlr.press/v252/banerjee24a.html.
