On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation

Ruben T. Lucassen, Tijn van de Luijtgaarden, Sander P. J. Moonemans, Gerben E. Breimer, Willeke A. M. Blokx, Mitko Veta
Proceedings of the MICCAI Workshop on Computational Pathology, PMLR 316:1-11, 2026.

Abstract

Vision-language models in pathology enable multimodal case retrieval and automated report generation. Many of the models developed so far, however, have been trained on pathology reports that include information which cannot be inferred from paired whole slide images (e.g., patient history), potentially leading to hallucinated sentences in generated reports. To this end, we investigate how the selection of information from pathology reports for vision-language modeling affects the quality of the multimodal representations and generated reports. More concretely, we compare a model trained on full reports against a model trained on preprocessed reports that only include sentences describing the cell and tissue appearances based on the H&E-stained slides. For the experiments, we built upon the BLIP-2 framework and used a cutaneous melanocytic lesion dataset of 42,433 H&E-stained whole slide images and 19,636 corresponding pathology reports. Model performance was assessed using image-to-text and text-to-image retrieval, as well as qualitative evaluation of the generated reports by an expert pathologist. Our results demonstrate that text preprocessing prevents hallucination in report generation. Despite the improvement in the quality of the generated reports, training the vision-language model on full reports showed better cross-modal retrieval performance.

Cite this Paper


BibTeX
@InProceedings{pmlr-v316-lucassen26a,
  title = {On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation},
  author = {Lucassen, Ruben T. and Luijtgaarden, Tijn van de and Moonemans, Sander P. J. and Breimer, Gerben E. and Blokx, Willeke A. M. and Veta, Mitko},
  booktitle = {Proceedings of the MICCAI Workshop on Computational Pathology},
  pages = {1--11},
  year = {2026},
  editor = {Studer, Linda and Ciompi, Francesco and Khalili, Nadieh and Faryna, Khrystyna and Yeong, Joe and Lau, Mai Chan and Chen, Hao and Liu, Ziyi and Brattoli, Biagio},
  volume = {316},
  series = {Proceedings of Machine Learning Research},
  month = {27 Sep},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v316/main/assets/lucassen26a/lucassen26a.pdf},
  url = {https://proceedings.mlr.press/v316/lucassen26a.html},
  abstract = {Vision-language models in pathology enable multimodal case retrieval and automated report generation. Many of the models developed so far, however, have been trained on pathology reports that include information which cannot be inferred from paired whole slide images (e.g., patient history), potentially leading to hallucinated sentences in generated reports. To this end, we investigate how the selection of information from pathology reports for vision-language modeling affects the quality of the multimodal representations and generated reports. More concretely, we compare a model trained on full reports against a model trained on preprocessed reports that only include sentences describing the cell and tissue appearances based on the H&E-stained slides. For the experiments, we built upon the BLIP-2 framework and used a cutaneous melanocytic lesion dataset of 42,433 H&E-stained whole slide images and 19,636 corresponding pathology reports. Model performance was assessed using image-to-text and text-to-image retrieval, as well as qualitative evaluation of the generated reports by an expert pathologist. Our results demonstrate that text preprocessing prevents hallucination in report generation. Despite the improvement in the quality of the generated reports, training the vision-language model on full reports showed better cross-modal retrieval performance.}
}
Endnote
%0 Conference Paper
%T On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation
%A Ruben T. Lucassen
%A Tijn van de Luijtgaarden
%A Sander P. J. Moonemans
%A Gerben E. Breimer
%A Willeke A. M. Blokx
%A Mitko Veta
%B Proceedings of the MICCAI Workshop on Computational Pathology
%C Proceedings of Machine Learning Research
%D 2026
%E Linda Studer
%E Francesco Ciompi
%E Nadieh Khalili
%E Khrystyna Faryna
%E Joe Yeong
%E Mai Chan Lau
%E Hao Chen
%E Ziyi Liu
%E Biagio Brattoli
%F pmlr-v316-lucassen26a
%I PMLR
%P 1--11
%U https://proceedings.mlr.press/v316/lucassen26a.html
%V 316
%X Vision-language models in pathology enable multimodal case retrieval and automated report generation. Many of the models developed so far, however, have been trained on pathology reports that include information which cannot be inferred from paired whole slide images (e.g., patient history), potentially leading to hallucinated sentences in generated reports. To this end, we investigate how the selection of information from pathology reports for vision-language modeling affects the quality of the multimodal representations and generated reports. More concretely, we compare a model trained on full reports against a model trained on preprocessed reports that only include sentences describing the cell and tissue appearances based on the H&E-stained slides. For the experiments, we built upon the BLIP-2 framework and used a cutaneous melanocytic lesion dataset of 42,433 H&E-stained whole slide images and 19,636 corresponding pathology reports. Model performance was assessed using image-to-text and text-to-image retrieval, as well as qualitative evaluation of the generated reports by an expert pathologist. Our results demonstrate that text preprocessing prevents hallucination in report generation. Despite the improvement in the quality of the generated reports, training the vision-language model on full reports showed better cross-modal retrieval performance.
APA
Lucassen, R.T., Luijtgaarden, T.v.d., Moonemans, S.P.J., Breimer, G.E., Blokx, W.A.M. & Veta, M. (2026). On the Importance of Text Preprocessing for Multimodal Representation Learning and Pathology Report Generation. Proceedings of the MICCAI Workshop on Computational Pathology, in Proceedings of Machine Learning Research 316:1-11. Available from https://proceedings.mlr.press/v316/lucassen26a.html.