A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms

Xinwei Zhang, Mingyi Hong, Sairaj Dhople, Nicola Elia
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:26206-26222, 2022.

Abstract

In modern machine learning systems, distributed algorithms are deployed across applications to ensure data privacy and optimal utilization of computational resources. This work offers a fresh perspective to model, analyze, and design distributed optimization algorithms through the lens of stochastic multi-rate feedback control. We show that a substantial class of distributed algorithms—including popular Gradient Tracking for decentralized learning, and FedPD and Scaffold for federated learning—can be modeled as a certain discrete-time stochastic feedback-control system, possibly with multiple sampling rates. This key observation allows us to develop a generic framework to analyze the convergence of the entire algorithm class. It also enables one to easily add desirable features such as differential privacy guarantees, or to deal with practical settings such as partial agent participation, communication compression, and imperfect communication in algorithm design and analysis.
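To make the feedback-control viewpoint concrete, the sketch below (our own illustration, not code from the paper) writes gradient tracking for decentralized optimization as a discrete-time feedback loop: the mixing matrix plays the role of the system dynamics, and a tracked average gradient is the feedback signal driving the local iterates. All names and parameters here (gradient_tracking, alpha, the ring mixing matrix W, the toy quadratic objectives) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not from the paper): gradient tracking on a ring of n
# agents, viewed as a discrete-time feedback loop in which the mixing matrix W
# mixes states and the tracked gradient y acts as the feedback signal.

def gradient_tracking(grads, x0, W, alpha=0.1, iters=300):
    """grads: list of per-agent gradient callables; x0: (n, d) initial iterates."""
    x = x0.copy()
    y = np.stack([g(xi) for g, xi in zip(grads, x)])   # tracker starts at the local gradients
    for _ in range(iters):
        x_next = W @ x - alpha * y                      # consensus step corrected by tracked gradient
        g_new = np.stack([g(xi) for g, xi in zip(grads, x_next)])
        g_old = np.stack([g(xi) for g, xi in zip(grads, x)])
        y = W @ y + g_new - g_old                       # track the network-average gradient
        x = x_next
    return x

# Toy example: agent i holds f_i(x) = 0.5 * ||x - b_i||^2, so the consensus
# optimum is the mean of the b_i.
n, d = 4, 2
rng = np.random.default_rng(0)
b = rng.normal(size=(n, d))
grads = [lambda x, bi=bi: x - bi for bi in b]
W = np.zeros((n, n))
for i in range(n):                                      # doubly stochastic ring mixing matrix
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25
x_final = gradient_tracking(grads, np.zeros((n, d)), W)
print(x_final)            # every row approaches b.mean(axis=0)
```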

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-zhang22j,
  title     = {A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms},
  author    = {Zhang, Xinwei and Hong, Mingyi and Dhople, Sairaj and Elia, Nicola},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {26206--26222},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhang22j/zhang22j.pdf},
  url       = {https://proceedings.mlr.press/v162/zhang22j.html},
  abstract  = {In modern machine learning systems, distributed algorithms are deployed across applications to ensure data privacy and optimal utilization of computational resources. This work offers a fresh perspective to model, analyze, and design distributed optimization algorithms through the lens of stochastic multi-rate feedback control. We show that a substantial class of distributed algorithms—including popular Gradient Tracking for decentralized learning, and FedPD and Scaffold for federated learning—can be modeled as a certain discrete-time stochastic feedback-control system, possibly with multiple sampling rates. This key observation allows us to develop a generic framework to analyze the convergence of the entire algorithm class. It also enables one to easily add desirable features such as differential privacy guarantees, or to deal with practical settings such as partial agent participation, communication compression, and imperfect communication in algorithm design and analysis.}
}
Endnote
%0 Conference Paper
%T A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms
%A Xinwei Zhang
%A Mingyi Hong
%A Sairaj Dhople
%A Nicola Elia
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhang22j
%I PMLR
%P 26206--26222
%U https://proceedings.mlr.press/v162/zhang22j.html
%V 162
%X In modern machine learning systems, distributed algorithms are deployed across applications to ensure data privacy and optimal utilization of computational resources. This work offers a fresh perspective to model, analyze, and design distributed optimization algorithms through the lens of stochastic multi-rate feedback control. We show that a substantial class of distributed algorithms—including popular Gradient Tracking for decentralized learning, and FedPD and Scaffold for federated learning—can be modeled as a certain discrete-time stochastic feedback-control system, possibly with multiple sampling rates. This key observation allows us to develop a generic framework to analyze the convergence of the entire algorithm class. It also enables one to easily add desirable features such as differential privacy guarantees, or to deal with practical settings such as partial agent participation, communication compression, and imperfect communication in algorithm design and analysis.
APA
Zhang, X., Hong, M., Dhople, S. & Elia, N. (2022). A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:26206-26222. Available from https://proceedings.mlr.press/v162/zhang22j.html.