One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning

Avrim Blum, Nika Haghtalab, Richard Lanas Phillips, Han Shao
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1005-1014, 2021.

Abstract

In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents. However, little is known about how collaboration protocols should take agents’ incentives into account when allocating individual resources for communal learning in order to maintain such collaborations. Inspired by game theoretic notions, this paper introduces a framework for incentive-aware learning and data sharing in federated learning. Our stable and envy-free equilibria capture notions of collaboration in the presence of agents interested in meeting their learning objectives while keeping their own sample collection burden low. For example, in an envy-free equilibrium, no agent would wish to swap their sampling burden with any other agent, and in a stable equilibrium, no agent would wish to unilaterally reduce their sampling burden. In addition to formalizing this framework, our contributions include characterizing the structural properties of such equilibria, proving when they exist, and showing how they can be computed. Furthermore, we compare the sample complexity of incentive-aware collaboration with that of optimal collaboration when one ignores agents’ incentives.

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-blum21a,
  title     = {One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning},
  author    = {Blum, Avrim and Haghtalab, Nika and Phillips, Richard Lanas and Shao, Han},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1005--1014},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/blum21a/blum21a.pdf},
  url       = {https://proceedings.mlr.press/v139/blum21a.html},
  abstract  = {In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents. However, little is known about how collaboration protocols should take agents' incentives into account when allocating individual resources for communal learning in order to maintain such collaborations. Inspired by game theoretic notions, this paper introduces a framework for incentive-aware learning and data sharing in federated learning. Our stable and envy-free equilibria capture notions of collaboration in the presence of agents interested in meeting their learning objectives while keeping their own sample collection burden low. For example, in an envy-free equilibrium, no agent would wish to swap their sampling burden with any other agent, and in a stable equilibrium, no agent would wish to unilaterally reduce their sampling burden. In addition to formalizing this framework, our contributions include characterizing the structural properties of such equilibria, proving when they exist, and showing how they can be computed. Furthermore, we compare the sample complexity of incentive-aware collaboration with that of optimal collaboration when one ignores agents' incentives.}
}
Endnote
%0 Conference Paper
%T One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning
%A Avrim Blum
%A Nika Haghtalab
%A Richard Lanas Phillips
%A Han Shao
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-blum21a
%I PMLR
%P 1005--1014
%U https://proceedings.mlr.press/v139/blum21a.html
%V 139
%X In recent years, federated learning has been embraced as an approach for bringing about collaboration across large populations of learning agents. However, little is known about how collaboration protocols should take agents’ incentives into account when allocating individual resources for communal learning in order to maintain such collaborations. Inspired by game theoretic notions, this paper introduces a framework for incentive-aware learning and data sharing in federated learning. Our stable and envy-free equilibria capture notions of collaboration in the presence of agents interested in meeting their learning objectives while keeping their own sample collection burden low. For example, in an envy-free equilibrium, no agent would wish to swap their sampling burden with any other agent, and in a stable equilibrium, no agent would wish to unilaterally reduce their sampling burden. In addition to formalizing this framework, our contributions include characterizing the structural properties of such equilibria, proving when they exist, and showing how they can be computed. Furthermore, we compare the sample complexity of incentive-aware collaboration with that of optimal collaboration when one ignores agents’ incentives.
APA
Blum, A., Haghtalab, N., Phillips, R.L. & Shao, H. (2021). One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1005-1014. Available from https://proceedings.mlr.press/v139/blum21a.html.