You Get What You Give: Reciprocally Fair Federated Learning

Aniket Murhekar, Jiaxin Song, Parnian Shahkar, Bhaskar Ray Chaudhury, Ruta Mehta
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:45289-45310, 2025.

Abstract

Federated learning (FL) is a popular collaborative learning paradigm, whereby agents with individual datasets can jointly train an ML model. While higher data sharing improves model accuracy and leads to higher payoffs, it also raises costs associated with data acquisition or loss of privacy, causing agents to be strategic about their data contribution. This leads to undesirable behavior at a Nash equilibrium (NE), such as free-riding, resulting in sub-optimal fairness, data sharing, and welfare. To address this, we design $\mathcal{M}^{Shap}$, a budget-balanced payment mechanism for FL that admits Nash equilibria under mild conditions and achieves reciprocal fairness: each agent’s payoff equals her contribution to the collaboration, as measured by the Shapley share. In addition to fairness, we show that the NE under $\mathcal{M}^{Shap}$ has desirable guarantees in terms of accuracy, welfare, and total data collected. We validate our theoretical results through experiments, demonstrating that $\mathcal{M}^{Shap}$ outperforms baselines in terms of fairness and efficiency.
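To make the notion of a Shapley share concrete, here is a minimal sketch (not the paper's mechanism $\mathcal{M}^{Shap}$ itself) of the exact Shapley computation: each agent's share is her marginal contribution to the coalition value, averaged over all orderings of the agents. The value function `v` and the per-agent data counts below are invented for illustration, loosely mimicking an accuracy function with diminishing returns in pooled data.

```python
from itertools import permutations

def shapley_shares(agents, value):
    """Exact Shapley values: average each agent's marginal
    contribution over all orderings of the agents (feasible for small n)."""
    n = len(agents)
    shares = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        coalition = set()
        for a in order:
            before = value(frozenset(coalition))
            coalition.add(a)
            shares[a] += value(frozenset(coalition)) - before
    return {a: s / len(orders) for a, s in shares.items()}

# Hypothetical toy setting: v(S) is the accuracy attained by pooling
# the data of the agents in S (numbers are purely illustrative).
data = {"A": 30, "B": 20, "C": 10}  # data points contributed per agent

def v(S):
    total = sum(data[a] for a in S)
    return total / (total + 10)  # concave: diminishing returns in data

shares = shapley_shares(list(data), v)
print(shares)
```

By construction the shares sum to the grand-coalition value (efficiency), and agents contributing more data receive larger shares; reciprocal fairness, as the abstract describes it, asks that each agent's equilibrium payoff match exactly this share.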

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-murhekar25a,
  title     = {You Get What You Give: Reciprocally Fair Federated Learning},
  author    = {Murhekar, Aniket and Song, Jiaxin and Shahkar, Parnian and Ray Chaudhury, Bhaskar and Mehta, Ruta},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {45289--45310},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/murhekar25a/murhekar25a.pdf},
  url       = {https://proceedings.mlr.press/v267/murhekar25a.html},
  abstract  = {Federated learning (FL) is a popular collaborative learning paradigm, whereby agents with individual datasets can jointly train an ML model. While higher data sharing improves model accuracy and leads to higher payoffs, it also raises costs associated with data acquisition or loss of privacy, causing agents to be strategic about their data contribution. This leads to undesirable behavior at a Nash equilibrium (NE) such as free-riding, resulting in sub-optimal fairness, data sharing, and welfare. To address this, we design $\mathcal{M}^{Shap}$, a budget-balanced payment mechanism for FL, that admits Nash equilibria under mild conditions, and achieves reciprocal fairness: where each agent’s payoff equals her contribution to the collaboration, as measured by the Shapley share. In addition to fairness, we show that the NE under $\mathcal{M}^{Shap}$ has desirable guarantees in terms of accuracy, welfare, and total data collected. We validate our theoretical results through experiments, demonstrating that $\mathcal{M}^{Shap}$ outperforms baselines in terms of fairness and efficiency.}
}
Endnote
%0 Conference Paper
%T You Get What You Give: Reciprocally Fair Federated Learning
%A Aniket Murhekar
%A Jiaxin Song
%A Parnian Shahkar
%A Bhaskar Ray Chaudhury
%A Ruta Mehta
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-murhekar25a
%I PMLR
%P 45289--45310
%U https://proceedings.mlr.press/v267/murhekar25a.html
%V 267
%X Federated learning (FL) is a popular collaborative learning paradigm, whereby agents with individual datasets can jointly train an ML model. While higher data sharing improves model accuracy and leads to higher payoffs, it also raises costs associated with data acquisition or loss of privacy, causing agents to be strategic about their data contribution. This leads to undesirable behavior at a Nash equilibrium (NE) such as free-riding, resulting in sub-optimal fairness, data sharing, and welfare. To address this, we design $\mathcal{M}^{Shap}$, a budget-balanced payment mechanism for FL, that admits Nash equilibria under mild conditions, and achieves reciprocal fairness: where each agent’s payoff equals her contribution to the collaboration, as measured by the Shapley share. In addition to fairness, we show that the NE under $\mathcal{M}^{Shap}$ has desirable guarantees in terms of accuracy, welfare, and total data collected. We validate our theoretical results through experiments, demonstrating that $\mathcal{M}^{Shap}$ outperforms baselines in terms of fairness and efficiency.
APA
Murhekar, A., Song, J., Shahkar, P., Ray Chaudhury, B. & Mehta, R. (2025). You Get What You Give: Reciprocally Fair Federated Learning. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:45289-45310. Available from https://proceedings.mlr.press/v267/murhekar25a.html.