Equitable Electronic Health Record Prediction with FAME: Fairness-Aware Multimodal Embedding
Proceedings of the 10th Machine Learning for Healthcare Conference, PMLR 298, 2025.
Abstract
Electronic Health Record (EHR) data encompasses diverse modalities, including text, images, and medical codes, that are vital for clinical decision-making. Multimodal AI (MAI) has emerged as a powerful approach for fusing this complex information. However, most existing MAI models optimize purely for predictive performance, potentially reinforcing biases across patient subgroups. Although bias-reduction techniques for multimodal models have been proposed, the individual strengths of each modality, and their interplay in both reducing bias and improving performance, remain underexplored. In this work, we introduce FAME (Fairness-Aware Multimodal Embedding), a framework that explicitly weights each modality according to its fairness contribution. FAME jointly optimizes performance and fairness through a combined loss function. We leverage the Error Distribution Disparity Index (EDDI) to measure fairness across subgroups and propose a root-mean-square (RMS) aggregation of per-subgroup EDDI scores to balance fairness across subgroups, ensuring equitable model outcomes. We evaluate FAME with BEHRT and BioClinicalBERT, combining structured and unstructured EHR data, and demonstrate that it improves on baseline methods in both performance and fairness across multiple EHR prediction tasks.
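To make the abstract's fairness machinery concrete, the snippet below is a minimal sketch, not the paper's implementation. It assumes per-subgroup EDDI is the gap between the subgroup error rate and the overall error rate, normalized by max(OER, 1 - OER), and aggregates the per-subgroup values by root mean square as described above. The function names, the normalization, and the combined-loss form are illustrative assumptions.

```python
import numpy as np

def eddi_per_subgroup(y_true, y_pred, groups):
    """Per-subgroup Error Distribution Disparity Index (EDDI).

    Assumed form: EDDI_s = (ER_s - OER) / max(OER, 1 - OER),
    where ER_s is the error rate within subgroup s and OER is the
    overall error rate. The exact normalization is an assumption.
    """
    errors = (np.asarray(y_true) != np.asarray(y_pred)).astype(float)
    groups = np.asarray(groups)
    oer = errors.mean()                     # overall error rate
    denom = max(oer, 1.0 - oer)             # always >= 0.5, so safe
    return {g: (errors[groups == g].mean() - oer) / denom
            for g in np.unique(groups)}

def rms_eddi(y_true, y_pred, groups):
    """RMS aggregation of per-subgroup EDDI values. Relative to a plain
    mean, the root mean square penalizes a large disparity in any single
    subgroup more heavily, the balancing effect the abstract describes."""
    vals = np.fromiter(eddi_per_subgroup(y_true, y_pred, groups).values(),
                       dtype=float)
    return float(np.sqrt(np.mean(vals ** 2)))

# Hypothetical combined objective: task loss plus a fairness penalty,
# traded off by a weight lambda_fair (name and form are illustrative).
def combined_loss(task_loss, fairness_score, lambda_fair=0.5):
    return task_loss + lambda_fair * fairness_score
```

Note that the hard 0/1 error rates above are suitable only for evaluation; during training, a differentiable surrogate of the fairness term (e.g., computed from predicted probabilities) would be plugged into the combined loss.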