MultiMedEval: A Benchmark and a Toolkit for Evaluating Medical Vision-Language Models

Corentin Royer, Bjoern Menze, Anjany Sekuboyina
Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, PMLR 250:1310-1327, 2024.

Abstract

We introduce MultiMedEval, an open-source toolkit for fair and reproducible evaluation of large medical vision-language models (VLMs). MultiMedEval comprehensively assesses model performance on a broad array of six multi-modal tasks across 23 datasets spanning 11 medical domains. The tasks and performance metrics were chosen for their widespread adoption in the community and their diversity, ensuring a thorough evaluation of a model’s overall generalizability. We open-source a Python toolkit (https://anonymous.4open.science/r/MultiMedEval-C780) with a simple interface and setup process, enabling the evaluation of any VLM in just a few lines of code. Our goal is to simplify the intricate landscape of VLM evaluation, thus promoting fair and uniform benchmarking of future VLMs.
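
The claim of evaluating any VLM "in just a few lines of code" corresponds to a batcher-style interface: the user wraps their model in a callable that maps a batch of prompts (with any attached images) to text answers, and the engine handles data loading, inference loops, and metric computation. The sketch below is illustrative only; the class, method, and parameter names (MultiMedEval, SetupParams, EvalParams, storage_dir, batch_size) and the task identifiers are assumptions modeled on this pattern, so consult the linked repository for the actual interface.

    # Illustrative usage sketch only -- the names below are assumptions,
    # not the verified MultiMedEval API; see the repository linked in
    # the abstract for the real interface.
    from multimedeval import MultiMedEval, SetupParams, EvalParams

    def batcher(batch):
        # Wrap your own VLM here: each element of `batch` is one prompt
        # (a conversation plus any attached images); return one free-text
        # answer per element, in order.
        return ["placeholder answer" for _ in batch]

    engine = MultiMedEval()
    engine.setup(SetupParams(storage_dir="data"))  # one-time dataset download/preparation
    results = engine.eval(                         # run the chosen tasks and compute metrics
        ["MedQA", "VQA-Rad"], batcher, EvalParams(batch_size=32)
    )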

Cite this Paper

BibTeX
@InProceedings{pmlr-v250-royer24a,
  title     = {MultiMedEval: A Benchmark and a Toolkit for Evaluating Medical Vision-Language Models},
  author    = {Royer, Corentin and Menze, Bjoern and Sekuboyina, Anjany},
  booktitle = {Proceedings of The 7th International Conference on Medical Imaging with Deep Learning},
  pages     = {1310--1327},
  year      = {2024},
  editor    = {Burgos, Ninon and Petitjean, Caroline and Vakalopoulou, Maria and Christodoulidis, Stergios and Coupe, Pierrick and Delingette, Hervé and Lartizien, Carole and Mateus, Diana},
  volume    = {250},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v250/main/assets/royer24a/royer24a.pdf},
  url       = {https://proceedings.mlr.press/v250/royer24a.html},
  abstract  = {We introduce MultiMedEval, an open-source toolkit for fair and reproducible evaluation of large medical vision-language models (VLMs). MultiMedEval comprehensively assesses model performance on a broad array of six multi-modal tasks across 23 datasets spanning 11 medical domains. The tasks and performance metrics were chosen for their widespread adoption in the community and their diversity, ensuring a thorough evaluation of a model’s overall generalizability. We open-source a Python toolkit (https://anonymous.4open.science/r/MultiMedEval-C780) with a simple interface and setup process, enabling the evaluation of any VLM in just a few lines of code. Our goal is to simplify the intricate landscape of VLM evaluation, thus promoting fair and uniform benchmarking of future VLMs.}
}
EndNote
%0 Conference Paper
%T MultiMedEval: A Benchmark and a Toolkit for Evaluating Medical Vision-Language Models
%A Corentin Royer
%A Bjoern Menze
%A Anjany Sekuboyina
%B Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ninon Burgos
%E Caroline Petitjean
%E Maria Vakalopoulou
%E Stergios Christodoulidis
%E Pierrick Coupe
%E Hervé Delingette
%E Carole Lartizien
%E Diana Mateus
%F pmlr-v250-royer24a
%I PMLR
%P 1310--1327
%U https://proceedings.mlr.press/v250/royer24a.html
%V 250
%X We introduce MultiMedEval, an open-source toolkit for fair and reproducible evaluation of large medical vision-language models (VLMs). MultiMedEval comprehensively assesses model performance on a broad array of six multi-modal tasks across 23 datasets spanning 11 medical domains. The tasks and performance metrics were chosen for their widespread adoption in the community and their diversity, ensuring a thorough evaluation of a model’s overall generalizability. We open-source a Python toolkit (https://anonymous.4open.science/r/MultiMedEval-C780) with a simple interface and setup process, enabling the evaluation of any VLM in just a few lines of code. Our goal is to simplify the intricate landscape of VLM evaluation, thus promoting fair and uniform benchmarking of future VLMs.
APA
Royer, C., Menze, B. & Sekuboyina, A. (2024). MultiMedEval: A Benchmark and a Toolkit for Evaluating Medical Vision-Language Models. Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 250:1310-1327. Available from https://proceedings.mlr.press/v250/royer24a.html.
