Federated Functional Gradient Boosting

Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:7814-7840, 2022.

Abstract

Motivated by the tremendous success of boosting methods in the standard centralized model of learning, we initiate the theory of boosting in the Federated Learning setting. The primary challenges in the Federated Learning setting are heterogeneity in client data and the requirement that no client data can be transmitted to the server. We develop federated functional gradient boosting (FFGB), an algorithm designed to handle these challenges. Under appropriate assumptions on the weak learning oracle, the FFGB algorithm is proved to converge efficiently to certain neighborhoods of the global optimum. The radii of these neighborhoods depend upon the level of heterogeneity measured via the total variation distance and the much tighter Wasserstein-1 distance, and diminish to zero as the setting becomes more homogeneous. In practice, as suggested by our theoretical findings, we propose using FFGB to warm-start existing Federated Learning solvers and observe a significant performance boost in highly heterogeneous settings. The code can be found here.
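As a rough illustration of the idea summarized above, the sketch below shows a generic federated functional-gradient-boosting loop for squared loss: each client fits a weak learner (a small regression tree) to the functional gradient of its own local loss, and the server aggregates the weak learners into an additive ensemble without ever receiving raw client data. This is a minimal sketch under simplifying assumptions, not the paper's FFGB algorithm, whose residual-correction scheme and convergence analysis are more involved; the helper names and hyperparameters are illustrative only.

```python
# Minimal sketch of functional gradient boosting in a federated setting
# (squared loss, regression). NOT the FFGB algorithm from the paper;
# all names and hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def ensemble_predict(ensemble, X):
    """Evaluate the additive ensemble f(x) = sum_t eta_t * h_t(x)."""
    pred = np.zeros(len(X))
    for eta, h in ensemble:
        pred += eta * h.predict(X)
    return pred


def federated_boost(client_data, rounds=50, eta=0.1, max_depth=3):
    """client_data: list of (X_k, y_k) pairs, one per client; raw data never leaves the client."""
    ensemble = []  # global model held by the server: list of (step size, weak learner)
    for _ in range(rounds):
        local_learners = []
        for X_k, y_k in client_data:
            # Functional gradient of the squared loss at the current model,
            # evaluated on the client's local data: the residuals y - f(x).
            residual = y_k - ensemble_predict(ensemble, X_k)
            # Weak learning oracle: fit a small regression tree to the
            # negative functional gradient direction.
            h = DecisionTreeRegressor(max_depth=max_depth).fit(X_k, residual)
            local_learners.append(h)
        # Server aggregates the clients' weak learners (uniform average here);
        # only fitted models, never data, are communicated.
        for h in local_learners:
            ensemble.append((eta / len(client_data), h))
    return ensemble
```

In line with the paper's practical suggestion, an ensemble produced this way could then serve to warm-start an existing Federated Learning solver such as FedAvg before further training.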

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-shen22a,
  title     = {Federated Functional Gradient Boosting},
  author    = {Shen, Zebang and Hassani, Hamed and Kale, Satyen and Karbasi, Amin},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {7814--7840},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/shen22a/shen22a.pdf},
  url       = {https://proceedings.mlr.press/v151/shen22a.html},
  abstract  = {Motivated by the tremendous success of boosting methods in the standard centralized model of learning, we initiate the theory of boosting in the Federated Learning setting. The primary challenges in the Federated Learning setting are heterogeneity in client data and the requirement that no client data can be transmitted to the server. We develop federated functional gradient boosting (FFGB) an algorithm that is designed to handle these challenges. Under appropriate assumptions on the weak learning oracle, the FFGB algorithm is proved to efficiently converge to certain neighborhoods of the global optimum. The radii of these neighborhoods depend upon the level of heterogeneity measured via the total variation distance and the much tighter Wasserstein-1 distance, and diminish to zero as the setting becomes more homogeneous. In practice, as suggested by our theoretical findings, we propose using FFGB to warm-start existing Federated Learning solvers and observe significant performance boost in highly heterogeneous settings. The code can be found here.}
}
Endnote
%0 Conference Paper
%T Federated Functional Gradient Boosting
%A Zebang Shen
%A Hamed Hassani
%A Satyen Kale
%A Amin Karbasi
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-shen22a
%I PMLR
%P 7814--7840
%U https://proceedings.mlr.press/v151/shen22a.html
%V 151
%X Motivated by the tremendous success of boosting methods in the standard centralized model of learning, we initiate the theory of boosting in the Federated Learning setting. The primary challenges in the Federated Learning setting are heterogeneity in client data and the requirement that no client data can be transmitted to the server. We develop federated functional gradient boosting (FFGB) an algorithm that is designed to handle these challenges. Under appropriate assumptions on the weak learning oracle, the FFGB algorithm is proved to efficiently converge to certain neighborhoods of the global optimum. The radii of these neighborhoods depend upon the level of heterogeneity measured via the total variation distance and the much tighter Wasserstein-1 distance, and diminish to zero as the setting becomes more homogeneous. In practice, as suggested by our theoretical findings, we propose using FFGB to warm-start existing Federated Learning solvers and observe significant performance boost in highly heterogeneous settings. The code can be found here.
APA
Shen, Z., Hassani, H., Kale, S. & Karbasi, A. (2022). Federated Functional Gradient Boosting. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:7814-7840. Available from https://proceedings.mlr.press/v151/shen22a.html.