Federated Asymptotics: a model to compare federated learning algorithms

Gary Cheng, Karan Chadha, John Duchi
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:10650-10689, 2023.

Abstract

We develop an asymptotic framework for comparing the test performance of (personalized) federated learning algorithms; our purpose is to move beyond algorithmic convergence arguments. To that end, we study a high-dimensional linear regression model to elucidate the statistical properties (per-client test error) of loss minimizers. Our techniques and model allow precise predictions about the benefits of personalization and information sharing in federated scenarios, including that Federated Averaging with simple client fine-tuning achieves asymptotic risk identical to that of more intricate meta-learning approaches while outperforming naive Federated Averaging. We evaluate and corroborate these theoretical predictions on federated versions of the EMNIST, CIFAR-100, Shakespeare, and Stack Overflow datasets.
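As a concrete illustration of the comparison the abstract describes, the sketch below sets up a per-client linear regression model in which each client's true parameter is a noisy perturbation of a shared global parameter, then contrasts naive Federated Averaging (every client uses the shared fit) with Federated Averaging followed by simple local fine-tuning, measured by per-client test error. The model, constants, and helper functions here are illustrative assumptions, not the paper's exact construction or algorithms.

```python
# Minimal sketch (not the paper's model): per-client linear regression where
# each client's parameter deviates from a shared global parameter. Compares
# naive FedAvg against FedAvg plus simple local fine-tuning on per-client
# test error. All constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 20, 50, 40                      # dimension, clients, samples/client
theta_global = rng.normal(size=d)
# Client heterogeneity: theta_i = theta_global + per-client shift.
thetas = theta_global + 0.5 * rng.normal(size=(m, d))

def client_data(theta, n_samples):
    X = rng.normal(size=(n_samples, d))
    y = X @ theta + 0.1 * rng.normal(size=n_samples)
    return X, y

train = [client_data(t, n) for t in thetas]
test = [client_data(t, 1000) for t in thetas]

# FedAvg surrogate: average of the clients' local least-squares fits.
local_fits = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in train]
fedavg = np.mean(local_fits, axis=0)

def test_err(theta_hat, X, y):
    return np.mean((X @ theta_hat - y) ** 2)

# Naive FedAvg: every client deploys the shared model as-is.
naive = np.mean([test_err(fedavg, X, y) for X, y in test])

# FedAvg + fine-tuning: a few local gradient steps from the shared model.
def fine_tune(theta0, X, y, steps=50, lr=0.05):
    theta = theta0.copy()
    for _ in range(steps):
        theta -= lr * 2 * X.T @ (X @ theta - y) / len(y)
    return theta

tuned = np.mean([
    test_err(fine_tune(fedavg, Xtr, ytr), Xte, yte)
    for (Xtr, ytr), (Xte, yte) in zip(train, test)
])

print(f"naive FedAvg per-client test MSE:      {naive:.3f}")
print(f"fine-tuned FedAvg per-client test MSE: {tuned:.3f}")
```

Under this kind of heterogeneity the shared model carries an irreducible bias on each client, so the fine-tuned variant's per-client error should come out markedly lower; the gap shrinks as the per-client parameter shifts shrink toward zero.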

Cite this Paper

BibTeX
@InProceedings{pmlr-v206-cheng23b,
  title     = {Federated Asymptotics: a model to compare federated learning algorithms},
  author    = {Cheng, Gary and Chadha, Karan and Duchi, John},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {10650--10689},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/cheng23b/cheng23b.pdf},
  url       = {https://proceedings.mlr.press/v206/cheng23b.html},
  abstract  = {We develop an asymptotic framework to compare the test performance of (personalized) federated learning algorithms whose purpose is to move beyond algorithmic convergence arguments. To that end, we study a high-dimensional linear regression model to elucidate the statistical properties (per client test error) of loss minimizers. Our techniques and model allow precise predictions about the benefits of personalization and information sharing in federated scenarios, including that Federated Averaging with simple client fine-tuning achieves identical asymptotic risk to more intricate meta-learning approaches and outperforms naive Federated Averaging. We evaluate and corroborate these theoretical predictions on federated versions of the EMNIST, CIFAR-100, Shakespeare, and Stack Overflow datasets.}
}
Endnote
%0 Conference Paper
%T Federated Asymptotics: a model to compare federated learning algorithms
%A Gary Cheng
%A Karan Chadha
%A John Duchi
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-cheng23b
%I PMLR
%P 10650--10689
%U https://proceedings.mlr.press/v206/cheng23b.html
%V 206
%X We develop an asymptotic framework to compare the test performance of (personalized) federated learning algorithms whose purpose is to move beyond algorithmic convergence arguments. To that end, we study a high-dimensional linear regression model to elucidate the statistical properties (per client test error) of loss minimizers. Our techniques and model allow precise predictions about the benefits of personalization and information sharing in federated scenarios, including that Federated Averaging with simple client fine-tuning achieves identical asymptotic risk to more intricate meta-learning approaches and outperforms naive Federated Averaging. We evaluate and corroborate these theoretical predictions on federated versions of the EMNIST, CIFAR-100, Shakespeare, and Stack Overflow datasets.
APA
Cheng, G., Chadha, K. & Duchi, J. (2023). Federated Asymptotics: a model to compare federated learning algorithms. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:10650-10689. Available from https://proceedings.mlr.press/v206/cheng23b.html.
