Uncertainty-aware Cycle Diffusion Model for Fair Glaucoma Diagnosis
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:1029-1043, 2026.
Abstract
Fairness has become a critical ethical concern, particularly in AI-based healthcare applications. Data imbalance and limited sample sizes can reduce diagnostic performance for underrepresented groups, which in turn harms the fairness of AI systems deployed in real-world scenarios. Generative models, such as diffusion models, offer a promising solution by generating diverse synthetic data to support underrepresented groups, improving fairness and performance while mitigating privacy risks. We propose a shape-controlled framework that incorporates demographic information into an end-to-end diffusion model, along with an automatic selection strategy that identifies overconfidently misclassified samples. These challenging samples are then augmented via the generative model to improve classification performance, and the same strategy removes potentially misleading lower-quality synthetic samples. Two ophthalmic experts validated the clinical relevance and plausibility of our synthetic images through random external examination. Our method outperforms state-of-the-art methods on the Harvard-FairVLMed dataset in both fairness and diagnostic accuracy.
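The abstract's "automatic selection strategy" targets samples that a classifier gets wrong while being highly confident. A minimal sketch of that idea, assuming predicted class probabilities and a confidence threshold `tau` (both the function name and the threshold are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def select_overconfident_errors(probs, labels, tau=0.9):
    """Hypothetical sketch: return indices of samples that are
    misclassified AND predicted with confidence >= tau."""
    probs = np.asarray(probs)
    labels = np.asarray(labels)
    preds = probs.argmax(axis=1)   # predicted class per sample
    conf = probs.max(axis=1)       # confidence in that prediction
    mask = (preds != labels) & (conf >= tau)
    return np.flatnonzero(mask)

# toy example: 3 samples, binary classification
probs = [[0.95, 0.05],   # confident and correct -> not selected
         [0.97, 0.03],   # confident but wrong   -> selected
         [0.55, 0.45]]   # wrong but uncertain   -> not selected
labels = [0, 1, 1]
print(select_overconfident_errors(probs, labels))  # -> [1]
```

Samples flagged this way would then be candidates for augmentation with the generative model; the same confidence-versus-correctness logic could, symmetrically, be used to filter out misleading synthetic samples.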