Towards Understanding Biased Client Selection in Federated Learning

Yae Jee Cho, Jianyu Wang, Gauri Joshi
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:10351-10375, 2022.

Abstract

Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Previous works analyzed the convergence of federated learning by accounting for data heterogeneity, communication/computation limitations, and partial client participation. However, most assume unbiased client participation, where clients are selected such that the aggregated model update is unbiased. In our work, we present the convergence analysis of federated learning with biased client selection and quantify how the bias affects convergence speed. We show that biasing client selection towards clients with higher local loss yields faster error convergence. From this insight, we propose Power-of-Choice, a communication- and computation-efficient client selection framework that flexibly spans the trade-off between convergence speed and solution bias. Extensive experiments demonstrate that Power-of-Choice can converge up to 3 times faster and give $10\%$ higher test accuracy than the baseline random selection.
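As a concrete illustration of the selection rule this insight suggests, below is a minimal Python sketch of one Power-of-Choice-style selection round: sample a candidate set weighted by data size, then keep the candidates with the highest local loss. This is a sketch under assumptions, not the paper's reference implementation; the helper name `power_of_choice_select`, the use of NumPy, and the exact tie-breaking are illustrative choices.

```python
import numpy as np

def power_of_choice_select(local_losses, data_fractions, m, d, rng=None):
    """One Power-of-Choice-style selection round (illustrative sketch).

    local_losses   : length-K array of each client's current local loss
                     F_k(w) under the global model w.
    data_fractions : length-K array of p_k, client k's share of all data
                     (must sum to 1).
    m              : number of clients selected to train this round.
    d              : candidate-set size, m <= d <= K.  d = m roughly
                     recovers the unbiased random-selection baseline;
                     larger d biases selection towards high-loss clients.
    """
    rng = np.random.default_rng() if rng is None else rng
    local_losses = np.asarray(local_losses)
    K = len(local_losses)
    # Step 1: sample d candidate clients without replacement,
    # weighted by their data fractions p_k.
    candidates = rng.choice(K, size=d, replace=False, p=data_fractions)
    # Step 2: of those candidates, keep the m with the highest local loss.
    # (In practice only the d candidates would need to evaluate their
    # loss on the current global model, keeping the step cheap.)
    by_loss = candidates[np.argsort(local_losses[candidates])[::-1]]
    return by_loss[:m]

# Example: 100 clients with equal data shares; select 10 of 30 candidates.
losses = np.random.rand(100)
chosen = power_of_choice_select(losses, np.full(100, 0.01), m=10, d=30)
```

The candidate-set size d acts as the bias knob described in the abstract: with d = m every candidate is selected, approximating unbiased random selection, while d close to K makes the round nearly greedy on local loss, trading solution bias for convergence speed.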

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-jee-cho22a,
  title     = {Towards Understanding Biased Client Selection in Federated Learning},
  author    = {Jee Cho, Yae and Wang, Jianyu and Joshi, Gauri},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {10351--10375},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/jee-cho22a/jee-cho22a.pdf},
  url       = {https://proceedings.mlr.press/v151/jee-cho22a.html},
  abstract  = {Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Previous works analyzed the convergence of federated learning by accounting for data heterogeneity, communication/computation limitations, and partial client participation. However, most assume unbiased client participation, where clients are selected such that the aggregated model update is unbiased. In our work, we present the convergence analysis of federated learning with biased client selection and quantify how the bias affects convergence speed. We show that biasing client selection towards clients with higher local loss yields faster error convergence. From this insight, we propose Power-of-Choice, a communication- and computation-efficient client selection framework that flexibly spans the trade-off between convergence speed and solution bias. Extensive experiments demonstrate that Power-of-Choice can converge up to 3 times faster and give $10\%$ higher test accuracy than the baseline random selection.}
}
Endnote
%0 Conference Paper
%T Towards Understanding Biased Client Selection in Federated Learning
%A Yae Jee Cho
%A Jianyu Wang
%A Gauri Joshi
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-jee-cho22a
%I PMLR
%P 10351--10375
%U https://proceedings.mlr.press/v151/jee-cho22a.html
%V 151
%X Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Previous works analyzed the convergence of federated learning by accounting for data heterogeneity, communication/computation limitations, and partial client participation. However, most assume unbiased client participation, where clients are selected such that the aggregated model update is unbiased. In our work, we present the convergence analysis of federated learning with biased client selection and quantify how the bias affects convergence speed. We show that biasing client selection towards clients with higher local loss yields faster error convergence. From this insight, we propose Power-of-Choice, a communication- and computation-efficient client selection framework that flexibly spans the trade-off between convergence speed and solution bias. Extensive experiments demonstrate that Power-of-Choice can converge up to 3 times faster and give 10% higher test accuracy than the baseline random selection.
APA
Jee Cho, Y., Wang, J. & Joshi, G. (2022). Towards Understanding Biased Client Selection in Federated Learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:10351-10375. Available from https://proceedings.mlr.press/v151/jee-cho22a.html.