Privacy-Preserving Group Fairness in Cross-Device Federated Learning

Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De Cock, Golnoosh Farnadi
Proceedings of the Algorithmic Fairness Through the Lens of Metrics and Evaluation, PMLR 279:173-198, 2025.

Abstract

Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP). To this end, we propose a privacy-preserving approach to calculate group fairness notions in the cross-device FL setting. Then, we propose two bias mitigation pre-processing and post-processing techniques in cross-device FL under formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values. Empirical evaluations on real-world datasets demonstrate the effectiveness of our solution to train fair and accurate ML models in federated cross-device setups with privacy guarantees to the users.
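For readers unfamiliar with group fairness notions, a standard example (given here as general background, not necessarily the exact formulation used in the paper) is demographic parity, which requires the rate of positive predictions to be equal across groups defined by a sensitive attribute A:

\[ \Delta_{DP} \;=\; \big|\, \Pr[\hat{Y}=1 \mid A=0] \;-\; \Pr[\hat{Y}=1 \mid A=1] \,\big| \]

where \hat{Y} denotes the model's prediction; a classifier satisfies demographic parity when \Delta_{DP}=0. Estimating such a quantity requires group-wise counts over all clients' data, which is precisely why the paper combines FL with MPC and DP instead of collecting the sensitive attribute values centrally.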

Cite this Paper


BibTeX
@InProceedings{pmlr-v279-pentyala25a,
  title     = {Privacy-Preserving Group Fairness in Cross-Device Federated Learning},
  author    = {Pentyala, Sikha and Neophytou, Nicola and Nascimento, Anderson and De Cock, Martine and Farnadi, Golnoosh},
  booktitle = {Proceedings of the Algorithmic Fairness Through the Lens of Metrics and Evaluation},
  pages     = {173--198},
  year      = {2025},
  editor    = {Rateike, Miriam and Dieng, Awa and Watson-Daniels, Jamelle and Fioretto, Ferdinando and Farnadi, Golnoosh},
  volume    = {279},
  series    = {Proceedings of Machine Learning Research},
  month     = {14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v279/main/assets/pentyala25a/pentyala25a.pdf},
  url       = {https://proceedings.mlr.press/v279/pentyala25a.html},
  abstract  = {Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP). To this end, we propose a privacy-preserving approach to calculate group fairness notions in the cross-device FL setting. Then, we propose two bias mitigation pre-processing and post-processing techniques in cross-device FL under formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values. Empirical evaluations on real-world datasets demonstrate the effectiveness of our solution to train fair and accurate ML models in federated cross-device setups with privacy guarantees to the users.}
}
Endnote
%0 Conference Paper
%T Privacy-Preserving Group Fairness in Cross-Device Federated Learning
%A Sikha Pentyala
%A Nicola Neophytou
%A Anderson Nascimento
%A Martine De Cock
%A Golnoosh Farnadi
%B Proceedings of the Algorithmic Fairness Through the Lens of Metrics and Evaluation
%C Proceedings of Machine Learning Research
%D 2025
%E Miriam Rateike
%E Awa Dieng
%E Jamelle Watson-Daniels
%E Ferdinando Fioretto
%E Golnoosh Farnadi
%F pmlr-v279-pentyala25a
%I PMLR
%P 173--198
%U https://proceedings.mlr.press/v279/pentyala25a.html
%V 279
%X Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP). To this end, we propose a privacy-preserving approach to calculate group fairness notions in the cross-device FL setting. Then, we propose two bias mitigation pre-processing and post-processing techniques in cross-device FL under formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values. Empirical evaluations on real-world datasets demonstrate the effectiveness of our solution to train fair and accurate ML models in federated cross-device setups with privacy guarantees to the users.
APA
Pentyala, S., Neophytou, N., Nascimento, A., De Cock, M. & Farnadi, G. (2025). Privacy-Preserving Group Fairness in Cross-Device Federated Learning. Proceedings of the Algorithmic Fairness Through the Lens of Metrics and Evaluation, in Proceedings of Machine Learning Research 279:173-198. Available from https://proceedings.mlr.press/v279/pentyala25a.html.
