Path-RAG: Knowledge-Guided Key Region Retrieval for Open-ended Pathology Visual Question Answering

Awais Naeem, Tianhao Li, Huang-Ru Liao, Jiawei Xu, Aby Mammen Mathew, Zehao Zhu, Zhen Tan, Ajay Kumar Jaiswal, Raffi A. Salibian, Ziniu Hu, Tianlong Chen, Ying Ding
Proceedings of the 4th Machine Learning for Health Symposium, PMLR 259:735-746, 2025.

Abstract

Accurate diagnosis and prognosis assisted by pathology images are essential for cancer treatment selection and planning. Despite the recent trend of adopting deep-learning approaches for analyzing complex pathology images, these approaches often fall short because they overlook the domain-expert understanding of tissue structure and cell composition. In this work, we focus on the challenging Open-ended Pathology VQA (PathVQA-Open) task and propose a novel framework named Path-RAG, which leverages HistoCartography to retrieve relevant domain knowledge from pathology images and significantly improves performance on PathVQA-Open. Acknowledging the complexity of pathology image analysis, Path-RAG adopts a human-centered AI approach, using HistoCartography to select the relevant patches from pathology images. Our experiments suggest that this domain guidance can significantly boost the accuracy of LLaVA-Med from 38% to 47%, with a notable gain of 28% for H&E-stained pathology images in the PathVQA-Open dataset. For longer-form question and answer pairs, our model consistently achieves significant improvements of 32.5% on ARCH-Open PubMed and 30.6% on ARCH-Open Books for H&E images. All relevant code and datasets will be open-sourced.
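The retrieval step the abstract describes can be pictured with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' released code: it uses the open-source histocartography package (its NucleiExtractor preprocessing module, which exists in histocartography.preprocessing), scores patches by a simple nuclei-density heuristic that stands in for the paper's knowledge-guided region selection, and leaves the LLaVA-Med call as a hypothetical llava_med_answer helper.

```python
# Sketch of a Path-RAG-style pipeline: HistoCartography proposes
# nuclei-dense regions, which then guide an open-ended VQA model.
# Assumes `pip install histocartography`; the patch-scoring heuristic
# is a simplification, and `llava_med_answer` is a hypothetical
# stand-in for querying LLaVA-Med.
import numpy as np
from PIL import Image
from histocartography.preprocessing import NucleiExtractor


def top_k_patches(image: np.ndarray, patch: int = 224, k: int = 3):
    """Rank non-overlapping patches by detected-nuclei count (a simple
    proxy for diagnostic relevance) and return the k densest ones."""
    # Instance map: one integer id per detected nucleus, 0 = background.
    nuclei_map, _ = NucleiExtractor().process(image)
    scored = []
    h, w = nuclei_map.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = nuclei_map[y:y + patch, x:x + patch]
            # Count distinct nucleus ids; subtract 1 to drop background 0.
            scored.append((len(np.unique(tile)) - 1, (y, x)))
    scored.sort(reverse=True)
    return [image[y:y + patch, x:x + patch] for _, (y, x) in scored[:k]]


image = np.array(Image.open("slide_region.png").convert("RGB"))  # hypothetical H&E image
patches = top_k_patches(image)
# answer = llava_med_answer(question, image, context_patches=patches)
```

In the full system, the question and the retrieved patches would together form the prompt context for the VQA model; the sketch only shows how domain knowledge narrows the model's attention to candidate regions.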

Cite this Paper


BibTeX
@InProceedings{pmlr-v259-naeem25a,
  title     = {Path-RAG: Knowledge-Guided Key Region Retrieval for Open-ended Pathology Visual Question Answering},
  author    = {Naeem, Awais and Li, Tianhao and Liao, Huang-Ru and Xu, Jiawei and Mathew, Aby Mammen and Zhu, Zehao and Tan, Zhen and Jaiswal, Ajay Kumar and Salibian, Raffi A. and Hu, Ziniu and Chen, Tianlong and Ding, Ying},
  booktitle = {Proceedings of the 4th Machine Learning for Health Symposium},
  pages     = {735--746},
  year      = {2025},
  editor    = {Hegselmann, Stefan and Zhou, Helen and Healey, Elizabeth and Chang, Trenton and Ellington, Caleb and Mhasawade, Vishwali and Tonekaboni, Sana and Argaw, Peniel and Zhang, Haoran},
  volume    = {259},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v259/main/assets/naeem25a/naeem25a.pdf},
  url       = {https://proceedings.mlr.press/v259/naeem25a.html}
}
Endnote
%0 Conference Paper
%T Path-RAG: Knowledge-Guided Key Region Retrieval for Open-ended Pathology Visual Question Answering
%A Awais Naeem
%A Tianhao Li
%A Huang-Ru Liao
%A Jiawei Xu
%A Aby Mammen Mathew
%A Zehao Zhu
%A Zhen Tan
%A Ajay Kumar Jaiswal
%A Raffi A. Salibian
%A Ziniu Hu
%A Tianlong Chen
%A Ying Ding
%B Proceedings of the 4th Machine Learning for Health Symposium
%C Proceedings of Machine Learning Research
%D 2025
%E Stefan Hegselmann
%E Helen Zhou
%E Elizabeth Healey
%E Trenton Chang
%E Caleb Ellington
%E Vishwali Mhasawade
%E Sana Tonekaboni
%E Peniel Argaw
%E Haoran Zhang
%F pmlr-v259-naeem25a
%I PMLR
%P 735--746
%U https://proceedings.mlr.press/v259/naeem25a.html
%V 259
APA
Naeem, A., Li, T., Liao, H., Xu, J., Mathew, A.M., Zhu, Z., Tan, Z., Jaiswal, A.K., Salibian, R.A., Hu, Z., Chen, T. & Ding, Y. (2025). Path-RAG: Knowledge-Guided Key Region Retrieval for Open-ended Pathology Visual Question Answering. Proceedings of the 4th Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 259:735-746. Available from https://proceedings.mlr.press/v259/naeem25a.html.