EFIQA: Explainable Fundus Image Quality Assessment via Anatomical Priors

Pengwei Wang, José Morano, Qian Wan, Hrvoje Bogunović
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:2248-2264, 2026.

Abstract

Image quality control is vital for a wide range of downstream applications. Deep learning-based image quality assessment methods typically train classifiers on dataset-specific quality labels, inheriting two limitations: (1) generalization is tied to the labeling criteria of the training set, and (2) these methods cannot provide spatial feedback on where the quality is degraded, lacking explainability. In this work, we propose EFIQA, a framework that requires no quality-related supervision and produces spatial quality maps by design. Rather than learning “what is degradation” from human-annotated labels, EFIQA learns “what should be there” by leveraging anatomical priors. For fundus photography, we instantiate this as a two-stage approach: we first train an unsupervised anomaly detector via masked anatomical inpainting to identify regions of missing vasculature, and then distill this prior knowledge into a shallow adapter that maps features of a frozen foundation model to precise quality maps. External-dataset evaluation demonstrates that this label-free approach with minimal adaptation achieves better performance and explainability compared with supervised methods across benchmarks with different quality criteria, highlighting its potential for real-world applications.
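The two-stage idea in the abstract (stage 1: inpainting-based anomaly scores; stage 2: distilling those scores into a shallow adapter over frozen features) can be sketched in miniature. This is not the paper's implementation: the inpainter is replaced by a trivial mean-fill, the frozen foundation-model features by random projections, and the adapter by a per-pixel least-squares fit; every function name here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1 (toy stand-in): masked-inpainting anomaly score ---
def inpaint_reconstruct(image, mask):
    """Fill masked pixels with the unmasked mean (stand-in for a trained inpainter)."""
    filled = image.copy()
    filled[mask] = image[~mask].mean()
    return filled

def anomaly_map(image, mask):
    """Per-pixel anomaly = |image - reconstruction|; zero outside the mask."""
    return np.abs(image - inpaint_reconstruct(image, mask))

# --- Stage 2 (toy stand-in): distil anomaly maps into a shallow adapter ---
def fit_adapter(features, targets):
    """features: (N, D), targets: (N,) -> linear adapter weights (D,)."""
    w, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return w

# Toy data: one 16x16 "fundus" image with a masked square region.
H = W = 16
image = rng.random((H, W))
mask = np.zeros((H, W), dtype=bool)
mask[4:8, 4:8] = True

target = anomaly_map(image, mask).ravel()      # stage-1 pseudo-labels
features = rng.random((H * W, 8))              # frozen-backbone features (random here)
w = fit_adapter(features, target)              # shallow adapter = linear probe
quality_map = (features @ w).reshape(H, W)     # spatial quality map
print(quality_map.shape)
```

In the real method both stands-ins would be learned networks, but the data flow is the same: stage-1 outputs serve as supervision for stage 2, so no human quality labels enter the pipeline.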

Cite this Paper


BibTeX
@InProceedings{pmlr-v315-wang26f,
  title     = {EFIQA: Explainable Fundus Image Quality Assessment via Anatomical Priors},
  author    = {Wang, Pengwei and Morano, Jos\'{e} and Wan, Qian and Bogunovi\'c, Hrvoje},
  booktitle = {Proceedings of The 9th International Conference on Medical Imaging with Deep Learning},
  pages     = {2248--2264},
  year      = {2026},
  editor    = {Huo, Yuankai and Gao, Mingchen and Kuo, Chang-Fu and Jin, Yueming and Deng, Ruining},
  volume    = {315},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v315/main/assets/wang26f/wang26f.pdf},
  url       = {https://proceedings.mlr.press/v315/wang26f.html},
  abstract  = {Image quality control is vital for a wide range of downstream applications. Deep learning-based image quality assessment methods typically train classifiers on dataset-specific quality labels, inheriting two limitations: (1) generalization is tied to the labeling criteria of the training set, and (2) these methods cannot provide spatial feedback on where the quality is degraded, lacking explainability. In this work, we propose EFIQA, a framework that requires no quality-related supervision and produces spatial quality maps by design. Rather than learning ``what is degradation'' from human-annotated labels, EFIQA learns ``what should be there'' by leveraging anatomical priors. For fundus photography, we instantiate this as a two-stage approach, by first training an unsupervised anomaly detector via masked anatomical inpainting to identify regions of missing vasculature, and then distilling this prior knowledge into a shallow adapter mapping features of a frozen foundation model to precise quality maps. External-dataset evaluation demonstrates that this label-free approach with minimal adaptation achieves better performance and explainability compared with supervised methods across benchmarks with different quality criteria, highlighting its potential for real-world applications.}
}
Endnote
%0 Conference Paper
%T EFIQA: Explainable Fundus Image Quality Assessment via Anatomical Priors
%A Pengwei Wang
%A José Morano
%A Qian Wan
%A Hrvoje Bogunović
%B Proceedings of The 9th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Yuankai Huo
%E Mingchen Gao
%E Chang-Fu Kuo
%E Yueming Jin
%E Ruining Deng
%F pmlr-v315-wang26f
%I PMLR
%P 2248--2264
%U https://proceedings.mlr.press/v315/wang26f.html
%V 315
%X Image quality control is vital for a wide range of downstream applications. Deep learning-based image quality assessment methods typically train classifiers on dataset-specific quality labels, inheriting two limitations: (1) generalization is tied to the labeling criteria of the training set, and (2) these methods cannot provide spatial feedback on where the quality is degraded, lacking explainability. In this work, we propose EFIQA, a framework that requires no quality-related supervision and produces spatial quality maps by design. Rather than learning “what is degradation” from human-annotated labels, EFIQA learns “what should be there” by leveraging anatomical priors. For fundus photography, we instantiate this as a two-stage approach, by first training an unsupervised anomaly detector via masked anatomical inpainting to identify regions of missing vasculature, and then distilling this prior knowledge into a shallow adapter mapping features of a frozen foundation model to precise quality maps. External-dataset evaluation demonstrates that this label-free approach with minimal adaptation achieves better performance and explainability compared with supervised methods across benchmarks with different quality criteria, highlighting its potential for real-world applications.
APA
Wang, P., Morano, J., Wan, Q. & Bogunović, H. (2026). EFIQA: Explainable Fundus Image Quality Assessment via Anatomical Priors. Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 315:2248-2264. Available from https://proceedings.mlr.press/v315/wang26f.html.
