DocVXQA: Context-Aware Visual Explanations for Document Question Answering

Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky, Kangsoo Jung, Ernest Valveny, Dimosthenis Karatzas
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:56549-56569, 2025.

Abstract

We propose DocVXQA, a novel framework for visually self-explainable document question answering, where the goal is not only to produce accurate answers to questions but also to learn visual heatmaps that highlight critical regions, offering interpretable justifications for the model's decisions. To integrate explanations into the learning process, we quantitatively formulate explainability principles as explicit learning criteria. Unlike conventional relevance map methods that solely emphasize regions relevant to the answer, our context-aware DocVXQA delivers explanations that are contextually sufficient yet representation-efficient. This fosters user trust while achieving a balance between predictive performance and interpretability in document visual question answering applications. Extensive experiments, including human evaluation, provide strong evidence supporting the effectiveness of our method.
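To make the idea of "explainability principles as explicit learning criteria" concrete, the following is a minimal, hypothetical PyTorch sketch of one way an answer-correctness loss could be combined with a penalty that keeps a relevance heatmap compact. It is illustrative only and not the paper's actual objective; all names (explainable_qa_loss, answer_logits, heatmap, lambda_sparsity) are assumptions introduced here.

# Illustrative sketch only; not the DocVXQA objective.
# Trades off answer accuracy against heatmap compactness.
import torch
import torch.nn.functional as F

def explainable_qa_loss(answer_logits, answer_targets, heatmap, lambda_sparsity=0.1):
    # answer_logits: (batch, num_classes) scores for the predicted answer
    # answer_targets: (batch,) ground-truth class indices
    # heatmap: (batch, H, W) relevance mask in [0, 1] over document regions
    task_loss = F.cross_entropy(answer_logits, answer_targets)  # answer correctness
    sparsity_loss = heatmap.abs().mean()                        # prefer a small highlighted area
    return task_loss + lambda_sparsity * sparsity_loss

# Toy usage with random tensors
logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
mask = torch.rand(4, 32, 32)
loss = explainable_qa_loss(logits, targets, mask)

The single weight lambda_sparsity stands in for the broader balance between predictive performance and interpretability that the paper formulates; the actual DocVXQA criteria (contextual sufficiency and representation efficiency) are defined in the paper itself.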

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-souibgui25a,
  title     = {{D}oc{VXQA}: Context-Aware Visual Explanations for Document Question Answering},
  author    = {Souibgui, Mohamed Ali and Choi, Changkyu and Barsky, Andrey and Jung, Kangsoo and Valveny, Ernest and Karatzas, Dimosthenis},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {56549--56569},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/souibgui25a/souibgui25a.pdf},
  url       = {https://proceedings.mlr.press/v267/souibgui25a.html},
  abstract  = {We propose DocVXQA, a novel framework for visually self-explainable document question answering, where the goal is not only to produce accurate answers to questions but also to learn visual heatmaps that highlight critical regions, offering interpretable justifications for the model decision. To integrate explanations into the learning process, we quantitatively formulate explainability principles as explicit learning criteria. Unlike conventional relevance map methods that solely emphasize regions relevant to the answer, our context-aware DocVXQA delivers explanations that are contextually sufficient yet representation-efficient. This fosters user trust while achieving a balance between predictive performance and interpretability in document visual question answering applications. Extensive experiments, including human evaluation, provide strong evidence supporting the effectiveness of our method.}
}
Endnote
%0 Conference Paper
%T DocVXQA: Context-Aware Visual Explanations for Document Question Answering
%A Mohamed Ali Souibgui
%A Changkyu Choi
%A Andrey Barsky
%A Kangsoo Jung
%A Ernest Valveny
%A Dimosthenis Karatzas
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-souibgui25a
%I PMLR
%P 56549--56569
%U https://proceedings.mlr.press/v267/souibgui25a.html
%V 267
%X We propose DocVXQA, a novel framework for visually self-explainable document question answering, where the goal is not only to produce accurate answers to questions but also to learn visual heatmaps that highlight critical regions, offering interpretable justifications for the model decision. To integrate explanations into the learning process, we quantitatively formulate explainability principles as explicit learning criteria. Unlike conventional relevance map methods that solely emphasize regions relevant to the answer, our context-aware DocVXQA delivers explanations that are contextually sufficient yet representation-efficient. This fosters user trust while achieving a balance between predictive performance and interpretability in document visual question answering applications. Extensive experiments, including human evaluation, provide strong evidence supporting the effectiveness of our method.
APA
Souibgui, M.A., Choi, C., Barsky, A., Jung, K., Valveny, E. & Karatzas, D. (2025). DocVXQA: Context-Aware Visual Explanations for Document Question Answering. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:56549-56569. Available from https://proceedings.mlr.press/v267/souibgui25a.html.
