Democratising Pathology Co-Pilots: An Open Pipeline and Dataset for Whole-Slide Vision-Language Modelling

Sander Moonemans, Sebastiaan Ram, Frédérique Meeuwsen, Carlijn Lems, Jeroen van der Laak, Geert Litjens, Francesco Ciompi
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:3314-3335, 2026.

Abstract

Vision-language models (VLMs) have the potential to become co-pilots for pathologists. However, most VLMs either focus on small regions of interest within whole-slide images (WSIs), provide only static slide-level outputs, or rely on data that is not publicly available, limiting reproducibility. Furthermore, training data containing WSIs paired with detailed clinical reports is scarce, restricting progress toward transparent and generalisable VLMs. We address these limitations with three main contributions. First, we introduce Polysome, a standardised tool for synthetic instruction generation. Second, we apply Polysome to the public HISTAI dataset, generating HISTAI-Instruct, a large whole-slide instruction-tuning dataset spanning 24,259 slides and over 1.1 million instruction-response pairs. Finally, we use HISTAI-Instruct to train ANTONI-$\alpha$, a VLM capable of visual question answering (VQA). We show that ANTONI-$\alpha$ outperforms MedGemma on WSI-level VQA tasks of tissue identification, neoplasm detection, and differential diagnosis. We also compare the performance of multiple incarnations of ANTONI-$\alpha$ trained with different amounts of data. All methods, data, and code are publicly available.
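To make the synthetic instruction-generation idea concrete, the sketch below shows one plausible way a tool in the spirit of Polysome could turn a clinical report into slide-grounded instruction-response pairs. It is a minimal illustration only: the function names, prompt template, and output schema are assumptions made here for exposition and are not the actual Polysome API; any text-in/text-out LLM client can be plugged in.

    # Hypothetical sketch of synthetic instruction generation in the style of
    # Polysome; the real tool's interface may differ. All names are illustrative.
    import json
    from typing import Callable

    PROMPT_TEMPLATE = (
        "You are a pathologist. Given the clinical report below, write one "
        "question a pathologist might ask about the whole-slide image, then "
        "answer it using only facts from the report. Format: 'Q: ... A: ...'\n\n"
        "Report:\n{report}"
    )

    def make_instruction_pairs(
        slide_id: str,
        report: str,
        llm: Callable[[str], str],  # any text-in/text-out LLM client
        n_pairs: int = 3,
    ) -> list[dict]:
        """Generate instruction-response pairs grounded in a clinical report."""
        pairs = []
        for _ in range(n_pairs):
            raw = llm(PROMPT_TEMPLATE.format(report=report))
            # Expect 'Q: ... A: ...' from the LLM; split defensively and skip
            # malformed generations rather than crashing.
            if "A:" in raw:
                question, answer = raw.split("A:", 1)
                pairs.append({
                    "slide_id": slide_id,
                    "instruction": question.removeprefix("Q:").strip(),
                    "response": answer.strip(),
                })
        return pairs

    if __name__ == "__main__":
        # Stub LLM so the sketch runs without network access.
        stub = lambda prompt: (
            "Q: Is a neoplasm present? "
            "A: Yes, the report describes an invasive carcinoma."
        )
        demo = make_instruction_pairs("HISTAI-0001", "Invasive carcinoma ...", stub)
        print(json.dumps(demo, indent=2))

Run over every report in a dataset such as HISTAI, a loop of this shape would yield the kind of slide-level instruction-response corpus the abstract describes, with each pair traceable back to its source slide and report.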

Cite this Paper


BibTeX
@InProceedings{pmlr-v315-moonemans26a,
  title = {Democratising Pathology Co-Pilots: An Open Pipeline and Dataset for Whole-Slide Vision-Language Modelling},
  author = {Moonemans, Sander and Ram, Sebastiaan and Meeuwsen, Fr{\'e}d{\'e}rique and Lems, Carlijn and van der Laak, Jeroen and Litjens, Geert and Ciompi, Francesco},
  booktitle = {Proceedings of The 9th International Conference on Medical Imaging with Deep Learning},
  pages = {3314--3335},
  year = {2026},
  editor = {Huo, Yuankai and Gao, Mingchen and Kuo, Chang-Fu and Jin, Yueming and Deng, Ruining},
  volume = {315},
  series = {Proceedings of Machine Learning Research},
  month = {08--10 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v315/main/assets/moonemans26a/moonemans26a.pdf},
  url = {https://proceedings.mlr.press/v315/moonemans26a.html},
  abstract = {Vision-language models (VLMs) have the potential to become co-pilots for pathologists. However, most VLMs either focus on small regions of interest within whole-slide images (WSIs), provide only static slide-level outputs, or rely on data that is not publicly available, limiting reproducibility. Furthermore, training data containing WSIs paired with detailed clinical reports is scarce, restricting progress toward transparent and generalisable VLMs. We address these limitations with three main contributions. First, we introduce Polysome, a standardised tool for synthetic instruction generation. Second, we apply Polysome to the public HISTAI dataset, generating HISTAI-Instruct, a large whole-slide instruction-tuning dataset spanning 24,259 slides and over 1.1 million instruction-response pairs. Finally, we use HISTAI-Instruct to train ANTONI-$\alpha$, a VLM capable of visual question answering (VQA). We show that ANTONI-$\alpha$ outperforms MedGemma on WSI-level VQA tasks of tissue identification, neoplasm detection, and differential diagnosis. We also compare the performance of multiple incarnations of ANTONI-$\alpha$ trained with different amounts of data. All methods, data, and code are publicly available.}
}
Endnote
%0 Conference Paper
%T Democratising Pathology Co-Pilots: An Open Pipeline and Dataset for Whole-Slide Vision-Language Modelling
%A Sander Moonemans
%A Sebastiaan Ram
%A Frédérique Meeuwsen
%A Carlijn Lems
%A Jeroen van der Laak
%A Geert Litjens
%A Francesco Ciompi
%B Proceedings of The 9th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Yuankai Huo
%E Mingchen Gao
%E Chang-Fu Kuo
%E Yueming Jin
%E Ruining Deng
%F pmlr-v315-moonemans26a
%I PMLR
%P 3314--3335
%U https://proceedings.mlr.press/v315/moonemans26a.html
%V 315
%X Vision-language models (VLMs) have the potential to become co-pilots for pathologists. However, most VLMs either focus on small regions of interest within whole-slide images (WSIs), provide only static slide-level outputs, or rely on data that is not publicly available, limiting reproducibility. Furthermore, training data containing WSIs paired with detailed clinical reports is scarce, restricting progress toward transparent and generalisable VLMs. We address these limitations with three main contributions. First, we introduce Polysome, a standardised tool for synthetic instruction generation. Second, we apply Polysome to the public HISTAI dataset, generating HISTAI-Instruct, a large whole-slide instruction-tuning dataset spanning 24,259 slides and over 1.1 million instruction-response pairs. Finally, we use HISTAI-Instruct to train ANTONI-$\alpha$, a VLM capable of visual question answering (VQA). We show that ANTONI-$\alpha$ outperforms MedGemma on WSI-level VQA tasks of tissue identification, neoplasm detection, and differential diagnosis. We also compare the performance of multiple incarnations of ANTONI-$\alpha$ trained with different amounts of data. All methods, data, and code are publicly available.
APA
Moonemans, S., Ram, S., Meeuwsen, F., Lems, C., van der Laak, J., Litjens, G. & Ciompi, F. (2026). Democratising Pathology Co-Pilots: An Open Pipeline and Dataset for Whole-Slide Vision-Language Modelling. Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 315:3314-3335. Available from https://proceedings.mlr.press/v315/moonemans26a.html.
