Hierarchical Global Asynchronous Federated Learning Across Multi-Center

Wei Xie, Runqun Xiong, Junzhou Luo
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:543-558, 2025.

Abstract

Federated learning for training machine learning models across geographically distributed regional centers is becoming prevalent. However, because of disparities in location, latency, and computational capability, synchronously aggregating models across sites requires waiting for stragglers, which causes significant delays. Traditional asynchronous aggregation across regional centers still suffers from stale model parameters and outdated gradients, because the hierarchical aggregation involves local clients within each center. To address this, we propose Hierarchical Global Asynchronous Federated Learning (HGA-FL), which combines global asynchronous model aggregation across regional centers with synchronous aggregation and local consistency regularization within each center. We theoretically analyze the convergence rate of our method in the non-convex setting, demonstrating stable convergence during aggregation. Experimental evaluations show that our approach outperforms baseline two-level aggregation methods in global model generalization, particularly under data heterogeneity, latency, and gradient staleness.
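The abstract outlines a two-level aggregation scheme: synchronous FedAvg-style averaging plus a local consistency regularizer inside each regional center, and asynchronous, staleness-aware merging of center models at the global server. The paper's exact update rules are not given here, so the sketch below is a minimal illustration assuming a FedProx-style proximal term for the local alignment and a FedAsync-style staleness-discounted mixing weight for the global merge; all names and hyperparameters (local_train, mu, alpha, a) are illustrative assumptions, not the authors' published algorithm.

```python
import copy
from collections import OrderedDict

import torch


def local_train(model, center_model, loader, mu=0.01, lr=0.05, epochs=1):
    """Client update inside one regional center.

    Assumption: the 'local consistency regularization' is modeled as a
    FedProx-style proximal term pulling the client model toward its
    center's model.
    """
    model = copy.deepcopy(model)
    anchor = [p.detach().clone() for p in center_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            # Consistency regularizer: (mu / 2) * ||w - w_center||^2
            loss = loss + (mu / 2) * sum(
                ((p - a) ** 2).sum() for p, a in zip(model.parameters(), anchor)
            )
            loss.backward()
            opt.step()
    return model.state_dict()


def sync_center_aggregate(client_states, weights):
    """Synchronous (FedAvg-style) weighted average within one center."""
    total = sum(weights)
    agg = OrderedDict()
    for key in client_states[0]:
        agg[key] = sum(w * s[key] for w, s in zip(weights, client_states)) / total
    return agg


def async_global_update(global_state, center_state, staleness, alpha=0.6, a=0.5):
    """Asynchronous global merge with a staleness-discounted mixing weight.

    Assumption: a FedAsync-style polynomial discount; alpha and a are
    illustrative hyperparameters, not values from the paper.
    """
    alpha_t = alpha * (1.0 + staleness) ** (-a)
    return OrderedDict(
        (k, (1 - alpha_t) * global_state[k] + alpha_t * center_state[k])
        for k in global_state
    )
```

In this sketch, each center would call local_train on its clients, combine the results with sync_center_aggregate, and send the averaged state to the global server, which applies async_global_update as soon as the update arrives; the staleness discount down-weights centers whose contribution was computed against an old global model, which is the failure mode the abstract attributes to naive asynchronous hierarchical aggregation.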

Cite this Paper

BibTeX
@InProceedings{pmlr-v260-xie25c,
  title     = {Hierarchical Global Asynchronous Federated Learning Across Multi-Center},
  author    = {Xie, Wei and Xiong, Runqun and Luo, Junzhou},
  booktitle = {Proceedings of the 16th Asian Conference on Machine Learning},
  pages     = {543--558},
  year      = {2025},
  editor    = {Nguyen, Vu and Lin, Hsuan-Tien},
  volume    = {260},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--08 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v260/main/assets/xie25c/xie25c.pdf},
  url       = {https://proceedings.mlr.press/v260/xie25c.html},
  abstract  = {Federated learning for training machine learning models across geographically distributed regional centers is becoming prevalent. However, because of disparities in location, latency, and computational capability, synchronously aggregating models across sites requires waiting for stragglers, which causes significant delays. Traditional asynchronous aggregation across regional centers still suffers from stale model parameters and outdated gradients, because the hierarchical aggregation involves local clients within each center. To address this, we propose Hierarchical Global Asynchronous Federated Learning (HGA-FL), which combines global asynchronous model aggregation across regional centers with synchronous aggregation and local consistency regularization within each center. We theoretically analyze the convergence rate of our method in the non-convex setting, demonstrating stable convergence during aggregation. Experimental evaluations show that our approach outperforms baseline two-level aggregation methods in global model generalization, particularly under data heterogeneity, latency, and gradient staleness.}
}
Endnote
%0 Conference Paper
%T Hierarchical Global Asynchronous Federated Learning Across Multi-Center
%A Wei Xie
%A Runqun Xiong
%A Junzhou Luo
%B Proceedings of the 16th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Vu Nguyen
%E Hsuan-Tien Lin
%F pmlr-v260-xie25c
%I PMLR
%P 543--558
%U https://proceedings.mlr.press/v260/xie25c.html
%V 260
%X Federated learning for training machine learning models across geographically distributed regional centers is becoming prevalent. However, because of disparities in location, latency, and computational capability, synchronously aggregating models across sites requires waiting for stragglers, which causes significant delays. Traditional asynchronous aggregation across regional centers still suffers from stale model parameters and outdated gradients, because the hierarchical aggregation involves local clients within each center. To address this, we propose Hierarchical Global Asynchronous Federated Learning (HGA-FL), which combines global asynchronous model aggregation across regional centers with synchronous aggregation and local consistency regularization within each center. We theoretically analyze the convergence rate of our method in the non-convex setting, demonstrating stable convergence during aggregation. Experimental evaluations show that our approach outperforms baseline two-level aggregation methods in global model generalization, particularly under data heterogeneity, latency, and gradient staleness.
APA
Xie, W., Xiong, R. & Luo, J. (2025). Hierarchical Global Asynchronous Federated Learning Across Multi-Center. Proceedings of the 16th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 260:543-558. Available from https://proceedings.mlr.press/v260/xie25c.html.
