ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training

Hui-Po Wang, Sebastian Stich, Yang He, Mario Fritz
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:23034-23054, 2022.

Abstract

Federated learning is a powerful distributed learning scheme that allows numerous edge devices to collaboratively train a model without sharing their data. However, training is resource-intensive for edge devices, and limited network bandwidth is often the main bottleneck. Prior work often overcomes these constraints by condensing the models or messages into compact formats, e.g., via gradient compression or distillation. In contrast, we propose ProgFed, the first progressive training framework for efficient and effective federated learning. It inherently reduces computation and two-way communication costs while maintaining the strong performance of the final models. We theoretically prove that ProgFed converges at the same asymptotic rate as standard training on full models. Extensive results on a broad range of architectures, including CNNs (VGG, ResNet, ConvNets) and U-nets, and on diverse tasks from simple classification to medical image segmentation show that our highly effective training approach saves up to 20% computation and up to 63% communication costs for converged models. As our approach is also complementary to prior work on compression, we can achieve a wide range of trade-offs by combining these techniques, showing reduced communication of up to 50x at only 0.1% loss in utility. Code is available at https://github.com/a514514772/ProgFed.
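The abstract only outlines the approach at a high level. As a rough illustration of the core idea, the PyTorch-style sketch below trains a growing prefix of the network with a lightweight head under FedAvg-style aggregation, so early rounds compute on and exchange only part of the model. The stage boundaries, head design, and growth schedule here are illustrative assumptions, not the authors' implementation (see the linked repository for that).

# Minimal sketch of progressive federated training, assuming a PyTorch setup.
# Stage splits, heads, and the schedule are illustrative, not the paper's exact config.
import copy
import torch
import torch.nn as nn

# A model split into sequential stages; early rounds train only a prefix of it.
stages = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
])

def make_head(num_stages, num_classes=10):
    # Lightweight classification head attached to the currently active prefix.
    channels = [16, 32, 64][num_stages - 1]
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(channels, num_classes))

def active_model(num_stages):
    # Sub-model exchanged with clients: only the active stages plus a small head.
    # The stage modules are shared objects, so trained weights persist as we grow.
    return nn.Sequential(*stages[:num_stages], make_head(num_stages))

def local_update(model, data_loader, epochs=1, lr=0.01):
    # Standard local SGD on a client's private data (FedAvg-style).
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def average(state_dicts):
    # Uniform FedAvg aggregation of the client updates.
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(0)
    return avg

def progressive_fedavg(client_loaders, rounds=30, grow_every=10):
    # Grow the trained sub-model every `grow_every` rounds; because early rounds
    # only compute on and communicate a prefix of the network, both computation
    # and two-way communication are reduced relative to training the full model.
    num_stages = 1
    model = active_model(num_stages)
    for rnd in range(rounds):
        if rnd > 0 and rnd % grow_every == 0 and num_stages < len(stages):
            num_stages += 1
            model = active_model(num_stages)  # newly added stage starts from its init
        updates = [local_update(model, loader) for loader in client_loaders]
        model.load_state_dict(average(updates))
    return model

Each entry of client_loaders would be a torch DataLoader over one client's private data; the final returned model corresponds to the full architecture once all stages have been activated.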

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-wang22y,
  title     = {{P}rog{F}ed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training},
  author    = {Wang, Hui-Po and Stich, Sebastian and He, Yang and Fritz, Mario},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {23034--23054},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/wang22y/wang22y.pdf},
  url       = {https://proceedings.mlr.press/v162/wang22y.html}
}
Endnote
%0 Conference Paper
%T ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
%A Hui-Po Wang
%A Sebastian Stich
%A Yang He
%A Mario Fritz
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-wang22y
%I PMLR
%P 23034--23054
%U https://proceedings.mlr.press/v162/wang22y.html
%V 162
APA
Wang, H., Stich, S., He, Y. & Fritz, M. (2022). ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:23034-23054. Available from https://proceedings.mlr.press/v162/wang22y.html.
