Evaluating Reasoning Faithfulness in Medical Vision-Language Models using Multimodal Perturbations

Johannes Moll, Markus Graf, Tristan Lemke, Nicolas Lenhart, Daniel Truhn, Jean-Benoit Delbrouck, Jiazhen Pan, Daniel Rueckert, Lisa C. Adams, Keno K. Bressem
Proceedings of the Fifth Machine Learning for Health Symposium, PMLR 297:424-448, 2026.

Abstract

Vision-language models ({VLM}s) often produce chain-of-thought ({CoT}) explanations that sound plausible yet fail to reflect the underlying decision process, undermining trust in high-stakes clinical use. Existing evaluations rarely catch this misalignment, prioritizing answer accuracy or adherence to formats. We present a clinically grounded framework for chest X-ray visual question answering ({VQA}) that probes {CoT} faithfulness via controlled text and image modifications across three axes: clinical fidelity, causal attribution, and confidence calibration. In a reader study (n=4), evaluator-radiologist correlations fall within the observed inter-radiologist range for all axes, with strong alignment for attribution (Kendall’s tau-b = 0.670), moderate alignment for fidelity (tau-b = 0.387), and weak alignment for confidence tone (tau-b = 0.091), which we report with caution. Benchmarking six {VLM}s shows that answer accuracy and explanation quality can be decoupled, acknowledging injected cues does not ensure grounding, and text cues shift explanations more than visual cues. While some open-source models match final answer accuracy, proprietary models score higher on attribution (25.0% vs. 1.4%) and often on fidelity (36.1% vs. 31.7%), highlighting deployment risks and the need to evaluate beyond final answer accuracy.

Cite this Paper


BibTeX
@InProceedings{pmlr-v297-moll26a, title = {Evaluating Reasoning Faithfulness in Medical Vision-Language Models using Multimodal Perturbations}, author = {Moll, Johannes and Graf, Markus and Lemke, Tristan and Lenhart, Nicolas and Truhn, Daniel and Delbrouck, Jean-Benoit and Pan, Jiazhen and Rueckert, Daniel and Adams, Lisa C. and Bressem, Keno K.}, booktitle = {Proceedings of the Fifth Machine Learning for Health Symposium}, pages = {424--448}, year = {2026}, editor = {Argaw, Peniel and Zhang, Haoran and Jabbour, Sarah and Chandak, Payal and Ji, Jerry and Mukherjee, Sumit and Salaudeen, Olawale and Chang, Trenton and Healey, Elizabeth and Gröger, Fabian and Adibi, Amin and Hegselmann, Stefan and Wild, Benjamin and Noori, Ayush}, volume = {297}, series = {Proceedings of Machine Learning Research}, month = {13--14 Dec}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v297/main/assets/moll26a/moll26a.pdf}, url = {https://proceedings.mlr.press/v297/moll26a.html}, abstract = {Vision-language models ({VLM}s) often produce chain-of-thought ({CoT}) explanations that sound plausible yet fail to reflect the underlying decision process, undermining trust in high-stakes clinical use. Existing evaluations rarely catch this misalignment, prioritizing answer accuracy or adherence to formats. We present a clinically grounded framework for chest X-ray visual question answering ({VQA}) that probes {CoT} faithfulness via controlled text and image modifications across three axes: clinical fidelity, causal attribution, and confidence calibration. In a reader study (n=4), evaluator-radiologist correlations fall within the observed inter-radiologist range for all axes, with strong alignment for attribution (Kendall’s tau-b = 0.670), moderate alignment for fidelity (tau-b = 0.387), and weak alignment for confidence tone (tau-b = 0.091), which we report with caution. Benchmarking six {VLM}s shows that answer accuracy and explanation quality can be decoupled, acknowledging injected cues does not ensure grounding, and text cues shift explanations more than visual cues. While some open-source models match final answer accuracy, proprietary models score higher on attribution (25.0% vs. 1.4%) and often on fidelity (36.1% vs. 31.7%), highlighting deployment risks and the need to evaluate beyond final answer accuracy.} }
Endnote
%0 Conference Paper %T Evaluating Reasoning Faithfulness in Medical Vision-Language Models using Multimodal Perturbations %A Johannes Moll %A Markus Graf %A Tristan Lemke %A Nicolas Lenhart %A Daniel Truhn %A Jean-Benoit Delbrouck %A Jiazhen Pan %A Daniel Rueckert %A Lisa C. Adams %A Keno K. Bressem %B Proceedings of the Fifth Machine Learning for Health Symposium %C Proceedings of Machine Learning Research %D 2026 %E Peniel Argaw %E Haoran Zhang %E Sarah Jabbour %E Payal Chandak %E Jerry Ji %E Sumit Mukherjee %E Olawale Salaudeen %E Trenton Chang %E Elizabeth Healey %E Fabian Gröger %E Amin Adibi %E Stefan Hegselmann %E Benjamin Wild %E Ayush Noori %F pmlr-v297-moll26a %I PMLR %P 424--448 %U https://proceedings.mlr.press/v297/moll26a.html %V 297 %X Vision-language models ({VLM}s) often produce chain-of-thought ({CoT}) explanations that sound plausible yet fail to reflect the underlying decision process, undermining trust in high-stakes clinical use. Existing evaluations rarely catch this misalignment, prioritizing answer accuracy or adherence to formats. We present a clinically grounded framework for chest X-ray visual question answering ({VQA}) that probes {CoT} faithfulness via controlled text and image modifications across three axes: clinical fidelity, causal attribution, and confidence calibration. In a reader study (n=4), evaluator-radiologist correlations fall within the observed inter-radiologist range for all axes, with strong alignment for attribution (Kendall’s tau-b = 0.670), moderate alignment for fidelity (tau-b = 0.387), and weak alignment for confidence tone (tau-b = 0.091), which we report with caution. Benchmarking six {VLM}s shows that answer accuracy and explanation quality can be decoupled, acknowledging injected cues does not ensure grounding, and text cues shift explanations more than visual cues. While some open-source models match final answer accuracy, proprietary models score higher on attribution (25.0% vs. 1.4%) and often on fidelity (36.1% vs. 31.7%), highlighting deployment risks and the need to evaluate beyond final answer accuracy.
APA
Moll, J., Graf, M., Lemke, T., Lenhart, N., Truhn, D., Delbrouck, J., Pan, J., Rueckert, D., Adams, L.C. & Bressem, K.K.. (2026). Evaluating Reasoning Faithfulness in Medical Vision-Language Models using Multimodal Perturbations. Proceedings of the Fifth Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 297:424-448 Available from https://proceedings.mlr.press/v297/moll26a.html.

Related Material