Escaping Saddle Points in Heterogeneous Federated Learning via Distributed SGD with Communication Compression

Sijin Chen, Zhize Li, Yuejie Chi
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2701-2709, 2024.

Abstract

We consider the problem of finding second-order stationary points in the optimization of heterogeneous federated learning (FL). Previous works in FL mostly focus on first-order convergence guarantees, which do not rule out convergence to unstable saddle points. Meanwhile, a key bottleneck in FL is achieving communication efficiency without compromising learning accuracy, especially when local data are highly heterogeneous across clients. Motivated by this, we propose PowerEF-SGD, an algorithm that communicates only compressed information via a novel error-feedback scheme. To our knowledge, PowerEF-SGD is the first distributed and compressed SGD algorithm that provably escapes saddle points in heterogeneous FL without any data homogeneity assumptions. In particular, PowerEF-SGD improves to second-order stationary points after visiting first-order (possibly saddle) points, using additional gradient queries and communication rounds of almost the same order as required for first-order convergence, and its convergence rate exhibits linear speedup in the number of workers. Our theory improves on or recovers previous results while accommodating far less restrictive assumptions on the local data. Numerical experiments are provided to complement the theory.
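To make the compression-with-error-feedback idea concrete, here is a minimal sketch of classic error-feedback compressed distributed SGD, simulated on one machine. This is a generic illustration for intuition only, not the paper's PowerEF-SGD: the quadratic local objectives, the top-k compressor, and all hyperparameters below are assumptions made for the example.

# Generic error-feedback compressed SGD (NOT PowerEF-SGD): each worker sends
# a top-k sparsified update and remembers the part it could not send.
import numpy as np

def top_k(v, k):
    # Keep the k largest-magnitude entries of v; zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
d, n, k, lr, steps = 20, 8, 4, 0.05, 500

# Heterogeneous local objectives f_i(x) = 0.5 * ||x - b_i||^2 (illustrative);
# the global minimizer is the mean of the b_i.
b = rng.normal(size=(n, d))
x = rng.normal(size=d)
e = np.zeros((n, d))  # per-worker error-feedback memory

for t in range(steps):
    msgs = np.zeros((n, d))
    for i in range(n):
        g = (x - b[i]) + 0.1 * rng.normal(size=d)  # stochastic local gradient
        p = e[i] + lr * g                          # add back past residual
        msgs[i] = top_k(p, k)                      # compress before sending
        e[i] = p - msgs[i]                         # remember what was dropped
    x -= msgs.mean(axis=0)                         # server averages and steps

print("distance to optimum:", np.linalg.norm(x - b.mean(axis=0)))

The error memory e[i] accumulates whatever the compressor dropped and re-injects it into later messages, which is what allows the compressed updates to track the uncompressed trajectory over time.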

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-chen24d,
  title     = {Escaping Saddle Points in Heterogeneous Federated Learning via Distributed {SGD} with Communication Compression},
  author    = {Chen, Sijin and Li, Zhize and Chi, Yuejie},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {2701--2709},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/chen24d/chen24d.pdf},
  url       = {https://proceedings.mlr.press/v238/chen24d.html}
}
Endnote
%0 Conference Paper
%T Escaping Saddle Points in Heterogeneous Federated Learning via Distributed SGD with Communication Compression
%A Sijin Chen
%A Zhize Li
%A Yuejie Chi
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-chen24d
%I PMLR
%P 2701--2709
%U https://proceedings.mlr.press/v238/chen24d.html
%V 238
APA
Chen, S., Li, Z. & Chi, Y. (2024). Escaping Saddle Points in Heterogeneous Federated Learning via Distributed SGD with Communication Compression. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:2701-2709. Available from https://proceedings.mlr.press/v238/chen24d.html.
