Fair Federated Learning via the Proportional Veto Core

Bhaskar Ray Chaudhury, Aniket Murhekar, Zhuowen Yuan, Bo Li, Ruta Mehta, Ariel D. Procaccia
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:42245-42257, 2024.

Abstract

Previous work on fairness in federated learning introduced the notion of core stability, which provides utility-based fairness guarantees to any subset of participating agents. However, these guarantees require strong assumptions on agent utilities that render them impractical. To address this shortcoming, we measure the quality of output models in terms of their ordinal rank instead of their cardinal utility, and use this insight to adapt the classical notion of proportional veto core (PVC) from social choice theory to the federated learning setting. We prove that models that are PVC-stable exist in very general learning paradigms, even allowing non-convex model sets, as well as non-convex and non-concave loss functions. We also design Rank-Core-Fed, a distributed federated learning algorithm, to train a PVC-stable model. Finally, we demonstrate that Rank-Core-Fed outperforms baselines in terms of fairness on different datasets.
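The proportional veto core the abstract adapts is easy to illustrate in its classical social-choice form. The toy sketch below is an assumption about that classical setting (it is not the paper's Rank-Core-Fed algorithm): n agents each rank m alternatives, a coalition of size s is given veto power ⌈s·m/n⌉ − 1, and it vetoes an alternative that every member ranks within their bottom that-many choices; an alternative is PVC-stable if no coalition vetoes it. All function and variable names are illustrative.

```python
from math import ceil
from itertools import combinations

def is_vetoed(x, coalition, prefs, m, n):
    """Coalition S vetoes x if every member ranks x among their bottom
    ceil(|S| * m / n) - 1 alternatives (assumed classical veto-power rule)."""
    power = ceil(len(coalition) * m / n) - 1  # size of the bottom segment S controls
    return power > 0 and all(prefs[i].index(x) >= m - power for i in coalition)

def proportional_veto_core(prefs):
    """Return the alternatives not vetoed by any coalition of agents.
    prefs[i] is agent i's ranking, best alternative first."""
    n = len(prefs)
    m = len(prefs[0])
    core = []
    for x in prefs[0]:  # prefs[0] lists every alternative once
        blocked = any(
            is_vetoed(x, S, prefs, m, n)
            for s in range(1, n + 1)
            for S in combinations(range(n), s)
        )
        if not blocked:
            core.append(x)
    return core

# Three agents ranking three candidate models a, b, c (best -> worst).
prefs = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
print(proportional_veto_core(prefs))  # -> ['a', 'b']
```

Here c is excluded because agents 0 and 1 both rank it last, and a two-agent coalition has veto power ⌈2·3/3⌉ − 1 = 1; the brute-force coalition enumeration is exponential in n and only meant to make the blocking condition concrete.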

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-ray-chaudhury24a,
  title     = {Fair Federated Learning via the Proportional Veto Core},
  author    = {Ray Chaudhury, Bhaskar and Murhekar, Aniket and Yuan, Zhuowen and Li, Bo and Mehta, Ruta and Procaccia, Ariel D.},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {42245--42257},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/ray-chaudhury24a/ray-chaudhury24a.pdf},
  url       = {https://proceedings.mlr.press/v235/ray-chaudhury24a.html},
  abstract  = {Previous work on fairness in federated learning introduced the notion of core stability, which provides utility-based fairness guarantees to any subset of participating agents. However, these guarantees require strong assumptions on agent utilities that render them impractical. To address this shortcoming, we measure the quality of output models in terms of their ordinal rank instead of their cardinal utility, and use this insight to adapt the classical notion of proportional veto core (PVC) from social choice theory to the federated learning setting. We prove that models that are PVC-stable exist in very general learning paradigms, even allowing non-convex model sets, as well as non-convex and non-concave loss functions. We also design Rank-Core-Fed, a distributed federated learning algorithm, to train a PVC-stable model. Finally, we demonstrate that Rank-Core-Fed outperforms baselines in terms of fairness on different datasets.}
}
Endnote
%0 Conference Paper
%T Fair Federated Learning via the Proportional Veto Core
%A Bhaskar Ray Chaudhury
%A Aniket Murhekar
%A Zhuowen Yuan
%A Bo Li
%A Ruta Mehta
%A Ariel D. Procaccia
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-ray-chaudhury24a
%I PMLR
%P 42245--42257
%U https://proceedings.mlr.press/v235/ray-chaudhury24a.html
%V 235
%X Previous work on fairness in federated learning introduced the notion of core stability, which provides utility-based fairness guarantees to any subset of participating agents. However, these guarantees require strong assumptions on agent utilities that render them impractical. To address this shortcoming, we measure the quality of output models in terms of their ordinal rank instead of their cardinal utility, and use this insight to adapt the classical notion of proportional veto core (PVC) from social choice theory to the federated learning setting. We prove that models that are PVC-stable exist in very general learning paradigms, even allowing non-convex model sets, as well as non-convex and non-concave loss functions. We also design Rank-Core-Fed, a distributed federated learning algorithm, to train a PVC-stable model. Finally, we demonstrate that Rank-Core-Fed outperforms baselines in terms of fairness on different datasets.
APA
Ray Chaudhury, B., Murhekar, A., Yuan, Z., Li, B., Mehta, R. & Procaccia, A.D. (2024). Fair Federated Learning via the Proportional Veto Core. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:42245-42257. Available from https://proceedings.mlr.press/v235/ray-chaudhury24a.html.