Resilient and Communication Efficient Learning for Heterogeneous Federated Systems

Zhuangdi Zhu, Junyuan Hong, Steve Drew, Jiayu Zhou
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:27504-27526, 2022.

Abstract

The rise of Federated Learning (FL) is bringing machine learning to edge computing by utilizing data scattered across edge devices. However, the heterogeneity of edge network topologies and the uncertainty of wireless transmission are two major obstacles to the wide application of FL in edge computing, leading to prohibitive convergence times and high communication costs. In this work, we propose an FL scheme that addresses both challenges simultaneously. Specifically, we enable edge devices to learn self-distilled neural networks that are readily prunable to arbitrary sizes and that capture the knowledge of the learning domain in a nested and progressive manner. Our approach not only tackles system heterogeneity by serving edge devices with varying model architectures, but also alleviates connection uncertainty by allowing part of the model parameters to be transmitted over faulty network connections without wasting the knowledge those parameters contribute. Extensive empirical studies show that, under system heterogeneity and network instability, our approach demonstrates significantly greater resilience and higher communication efficiency than the state-of-the-art.
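To make the core mechanism concrete, the sketch below illustrates, in Python, one hypothetical reading of the abstract rather than the authors' implementation: channels are assumed to be ordered by importance (as nested, self-distilled training would arrange them), so slicing the leading channels of each layer yields a smaller working sub-network, and truncating an upload at any point still delivers a coherent prefix of parameters. The names prune_to_width and partial_upload are illustrative assumptions, not the paper's API.

import numpy as np

def prune_to_width(weights, ratio):
    """Keep the leading `ratio` fraction of output channels in each layer.

    Assumes channels are ordered by importance, so any prefix forms a
    valid sub-network. Hypothetical sketch, not the paper's method.
    """
    pruned = []
    in_keep = weights[0].shape[1]  # input dimension of the first layer is fixed
    for i, w in enumerate(weights):
        # keep the full output layer; shrink hidden layers to the target width
        out_keep = w.shape[0] if i == len(weights) - 1 else max(1, int(w.shape[0] * ratio))
        pruned.append(w[:out_keep, :in_keep])
        in_keep = out_keep
    return pruned

def partial_upload(weights, budget):
    """Send as many leading parameters as the link budget allows.

    Because parameters are nested by importance, a truncated upload
    still contributes a coherent (smaller) model update.
    """
    flat = np.concatenate([w.ravel() for w in weights])
    return flat[:min(budget, flat.size)]

# Example: a 3-layer MLP pruned to half width for a weak device, then
# uploaded over an unstable link that only fits 1000 parameters.
rng = np.random.default_rng(0)
full = [rng.standard_normal((64, 32)),
        rng.standard_normal((64, 64)),
        rng.standard_normal((10, 64))]
half = prune_to_width(full, 0.5)
payload = partial_upload(half, budget=1000)
print([w.shape for w in half], payload.size)

In this toy run, the weak device receives the half-width sub-network, and the truncated upload still delivers a usable prefix of parameters rather than a wasted transmission, which is the intuition behind the resilience claims in the abstract.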

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-zhu22e,
  title     = {Resilient and Communication Efficient Learning for Heterogeneous Federated Systems},
  author    = {Zhu, Zhuangdi and Hong, Junyuan and Drew, Steve and Zhou, Jiayu},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {27504--27526},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhu22e/zhu22e.pdf},
  url       = {https://proceedings.mlr.press/v162/zhu22e.html},
  abstract  = {The rise of Federated Learning (FL) is bringing machine learning to edge computing by utilizing data scattered across edge devices. However, the heterogeneity of edge network topologies and the uncertainty of wireless transmission are two major obstacles to the wide application of FL in edge computing, leading to prohibitive convergence times and high communication costs. In this work, we propose an FL scheme that addresses both challenges simultaneously. Specifically, we enable edge devices to learn self-distilled neural networks that are readily prunable to arbitrary sizes and that capture the knowledge of the learning domain in a nested and progressive manner. Our approach not only tackles system heterogeneity by serving edge devices with varying model architectures, but also alleviates connection uncertainty by allowing part of the model parameters to be transmitted over faulty network connections without wasting the knowledge those parameters contribute. Extensive empirical studies show that, under system heterogeneity and network instability, our approach demonstrates significantly greater resilience and higher communication efficiency than the state-of-the-art.}
}
Endnote
%0 Conference Paper
%T Resilient and Communication Efficient Learning for Heterogeneous Federated Systems
%A Zhuangdi Zhu
%A Junyuan Hong
%A Steve Drew
%A Jiayu Zhou
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhu22e
%I PMLR
%P 27504--27526
%U https://proceedings.mlr.press/v162/zhu22e.html
%V 162
%X The rise of Federated Learning (FL) is bringing machine learning to edge computing by utilizing data scattered across edge devices. However, the heterogeneity of edge network topologies and the uncertainty of wireless transmission are two major obstacles to the wide application of FL in edge computing, leading to prohibitive convergence times and high communication costs. In this work, we propose an FL scheme that addresses both challenges simultaneously. Specifically, we enable edge devices to learn self-distilled neural networks that are readily prunable to arbitrary sizes and that capture the knowledge of the learning domain in a nested and progressive manner. Our approach not only tackles system heterogeneity by serving edge devices with varying model architectures, but also alleviates connection uncertainty by allowing part of the model parameters to be transmitted over faulty network connections without wasting the knowledge those parameters contribute. Extensive empirical studies show that, under system heterogeneity and network instability, our approach demonstrates significantly greater resilience and higher communication efficiency than the state-of-the-art.
APA
Zhu, Z., Hong, J., Drew, S., & Zhou, J. (2022). Resilient and Communication Efficient Learning for Heterogeneous Federated Systems. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:27504-27526. Available from https://proceedings.mlr.press/v162/zhu22e.html.