VEIL: A Framework for Differentially Private, Interpretable, and Communication-Efficient Federated Learning
Proceedings of The Second AAAI Bridge Program on AI for Medicine and Healthcare, PMLR 317:275-283, 2026.
Abstract
Federated Learning (FL) promises to unlock the potential of multi-institutional clinical data by enabling collaborative model training without centralizing sensitive patient information. However, practical adoption has been critically hindered by three conflicting challenges: ensuring formal patient privacy, overcoming the “black box” nature of models, which erodes clinical trust, and managing the prohibitive communication costs of standard algorithms. In this work, we introduce VEIL (DP–Verified, Efficient, Interpretable, (Federated) Learning), a novel FL framework designed from the ground up to resolve these trade-offs. VEIL employs a federated concept evolution paradigm in which clients privately propose salient clinical features and a global model is constructed from a validated consensus. Our experiments on a real-world, multi-center ICU mortality prediction task demonstrate that VEIL offers a holistically superior solution. The final, calibrated VEIL model achieves competitive discriminative performance (AUC 0.835), on par with strong non-private baselines, while reducing communication overhead by over 90% and attaining best-in-class trustworthiness (ECE 0.010). We showcase VEIL’s primary contribution, deep instance-level interpretability, through clinical explanation dashboards that translate predictions into transparent, actionable insights. By jointly addressing the core barriers to adoption, VEIL provides a practical and trustworthy pathway for deploying federated learning in real-world medical settings. To facilitate reproducibility and further research, our implementation and the full set of hyperparameters will be made publicly available upon publication.