Equitable Electronic Health Record Prediction with FAME: Fairness-Aware Multimodal Embedding

Nikkie Hooman, Zhongjie Wu, Eric C. Larson, Mehak Gupta
Proceedings of the 10th Machine Learning for Healthcare Conference, PMLR 298, 2025.

Abstract

Electronic Health Record (EHR) data encompasses diverse modalities—text, images, and medical codes—that are vital for clinical decision-making. To process these complex data, multimodal AI (MAI) has emerged as a powerful approach for fusing such information. However, most existing MAI models optimize for better prediction performance, potentially reinforcing biases across patient subgroups. Although bias reduction techniques for multimodal models have been proposed, the individual strengths of each modality and their interplay in both reducing bias and optimizing performance remain underexplored. In this work, we introduce FAME (Fairness-Aware Multimodal Embeddings), a framework that explicitly weights each modality according to its fairness contribution. FAME optimizes both performance and fairness by incorporating a combined loss function. We leverage the Error Distribution Disparity Index (EDDI) to measure fairness across subgroups and propose an RMS-based (root mean square) aggregation method to balance fairness across subgroups, ensuring equitable model outcomes. We evaluate FAME with BEHRT and BioClinicalBERT, combining structured and unstructured EHR data, and demonstrate its effectiveness in terms of performance and fairness compared to other baselines across multiple EHR prediction tasks.
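The abstract's core mechanics lend themselves to a short illustration. The sketch below is a hypothetical rendering, not the authors' implementation: it assumes EDDI can be approximated as each subgroup's deviation in error rate from the overall error rate (the paper defines the exact index), aggregates those disparities with the RMS scheme the abstract describes, and folds the result into a combined performance-plus-fairness loss through an assumed trade-off weight lambda_fair. The FairnessAwareFusion module likewise stands in for FAME's modality weighting, with one learnable weight per modality embedding (e.g., BEHRT for medical codes, BioClinicalBERT for clinical notes).

```python
import torch
import torch.nn.functional as F


def subgroup_error_disparity(probs, labels, groups):
    """Deviation of each subgroup's mean error from the overall mean error.

    A stand-in for EDDI (Error Distribution Disparity Index); the paper's
    exact normalization may differ.
    """
    errors = torch.abs(probs - labels.float())  # soft per-sample error
    overall = errors.mean()
    return torch.stack([errors[groups == g].mean() - overall
                        for g in torch.unique(groups)])


def rms_fairness(disparities):
    """RMS aggregation over subgroup disparities: unlike a plain mean,
    it penalizes a large disparity in any single subgroup."""
    return torch.sqrt((disparities ** 2).mean())


class FairnessAwareFusion(torch.nn.Module):
    """Weighted fusion of per-modality embeddings with one learnable
    weight per modality (a sketch of the weighting idea, not FAME itself)."""

    def __init__(self, dims, out_dim):
        super().__init__()
        self.proj = torch.nn.ModuleList(torch.nn.Linear(d, out_dim) for d in dims)
        self.w = torch.nn.Parameter(torch.zeros(len(dims)))  # one logit per modality
        self.clf = torch.nn.Linear(out_dim, 1)

    def forward(self, embeddings):
        weights = torch.softmax(self.w, dim=0)  # modality weights, sum to 1
        fused = sum(w * p(e) for w, p, e in zip(weights, self.proj, embeddings))
        return self.clf(fused).squeeze(-1)  # prediction logits


def combined_loss(logits, labels, groups, lambda_fair=0.5):
    """Performance + fairness objective; lambda_fair is a hypothetical knob."""
    bce = F.binary_cross_entropy_with_logits(logits, labels.float())
    fair = rms_fairness(subgroup_error_disparity(torch.sigmoid(logits), labels, groups))
    return bce + lambda_fair * fair
```

Because the fairness term here is differentiable, gradients from the RMS-aggregated disparity flow back into the softmax weights, shifting mass toward modalities whose embeddings reduce subgroup error disparity, which is the behavior the abstract attributes to FAME's fairness-aware weighting.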

Cite this Paper


BibTeX
@InProceedings{pmlr-v298-hooman25a,
  title     = {Equitable Electronic Health Record Prediction with {FAME}: Fairness-Aware Multimodal Embedding},
  author    = {Hooman, Nikkie and Wu, Zhongjie and Larson, Eric C. and Gupta, Mehak},
  booktitle = {Proceedings of the 10th Machine Learning for Healthcare Conference},
  year      = {2025},
  editor    = {Agrawal, Monica and Deshpande, Kaivalya and Engelhard, Matthew and Joshi, Shalmali and Tang, Shengpu and Urteaga, Iñigo},
  volume    = {298},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v298/main/assets/hooman25a/hooman25a.pdf},
  url       = {https://proceedings.mlr.press/v298/hooman25a.html}
}
Endnote
%0 Conference Paper
%T Equitable Electronic Health Record Prediction with FAME: Fairness-Aware Multimodal Embedding
%A Nikkie Hooman
%A Zhongjie Wu
%A Eric C. Larson
%A Mehak Gupta
%B Proceedings of the 10th Machine Learning for Healthcare Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Monica Agrawal
%E Kaivalya Deshpande
%E Matthew Engelhard
%E Shalmali Joshi
%E Shengpu Tang
%E Iñigo Urteaga
%F pmlr-v298-hooman25a
%I PMLR
%U https://proceedings.mlr.press/v298/hooman25a.html
%V 298
APA
Hooman, N., Wu, Z., Larson, E. C., & Gupta, M. (2025). Equitable Electronic Health Record Prediction with FAME: Fairness-Aware Multimodal Embedding. Proceedings of the 10th Machine Learning for Healthcare Conference, in Proceedings of Machine Learning Research 298. Available from https://proceedings.mlr.press/v298/hooman25a.html.

Related Material

Download PDF: https://raw.githubusercontent.com/mlresearch/v298/main/assets/hooman25a/hooman25a.pdf