Distributed Online Optimization over a Heterogeneous Network with Any-Batch Mirror Descent

Nima Eshraghi, Ben Liang
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2933-2942, 2020.

Abstract

In distributed online optimization over a computing network with heterogeneous nodes, slow nodes can adversely affect the progress of fast nodes, leading to a drastic slowdown of the overall convergence process. To address this issue, we consider a new algorithm termed Distributed Any-Batch Mirror Descent (DABMD), which is based on distributed Mirror Descent but uses a fixed per-round computing time to limit the waiting by fast nodes for information updates from slow nodes. DABMD is characterized by varying minibatch sizes across nodes. It is applicable to a broader range of problems compared with existing distributed online optimization methods, such as those based on dual averaging, and it accommodates time-varying network topology. We study two versions of DABMD, depending on whether the computing nodes average their primal variables via single or multiple consensus iterations. We show that both versions provide strong theoretical performance guarantees by deriving upper bounds on their expected dynamic regret, which capture the variability in minibatch sizes. Our experimental results show a substantial reduction in cost and acceleration in convergence compared with the best known alternative.
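To make the mechanism concrete, the following is a minimal sketch of one round of the single-consensus-step variant described in the abstract, not the paper's exact algorithm: it assumes a simplex-constrained decision variable with the entropic mirror map, simulated per-gradient compute times, noisy linear losses, and a complete-graph gossip matrix. All function names, the step size, and the weight matrix are illustrative assumptions.

import numpy as np

def any_batch_gradients(x, sample_grad, time_budget, speed, rng):
    """Accumulate stochastic gradients for a fixed per-round compute budget.
    The resulting minibatch size depends on the node's speed: this is the
    "any-batch" idea, where fast nodes compute larger minibatches instead of
    idling while they wait for stragglers."""
    # Simulated clock: each gradient evaluation costs 1/speed time units.
    grads, elapsed = [], 0.0
    while elapsed + 1.0 / speed <= time_budget:
        grads.append(sample_grad(x, rng))
        elapsed += 1.0 / speed
    if not grads:
        return np.zeros_like(x), 0
    return np.mean(grads, axis=0), len(grads)

def mirror_step_simplex(x, grad, eta):
    """Entropic mirror descent update on the probability simplex
    (multiplicative-weights form). The paper allows a general Bregman
    divergence; this particular mirror map is only for illustration."""
    y = x * np.exp(-eta * grad)
    return y / y.sum()

def dabmd_round(xs, W, sample_grads, speeds, eta, time_budget, rng):
    """One round of a single-consensus-step DABMD-style sketch: each node takes
    a local any-batch gradient, a mirror descent step, then one gossip average."""
    updated, batch_sizes = [], []
    for i, x in enumerate(xs):
        g, b = any_batch_gradients(x, sample_grads[i], time_budget, speeds[i], rng)
        updated.append(mirror_step_simplex(x, g, eta))
        batch_sizes.append(b)
    # Consensus averaging with a doubly stochastic weight matrix W keeps the
    # iterates on the simplex (convex combinations of simplex points).
    return W @ np.array(updated), batch_sizes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 4, 5
    # Hypothetical noisy linear losses f_t(x) = <c, x>; stochastic gradient is c + noise.
    c = rng.random(d)
    sample_grads = [lambda x, r, c=c: c + 0.1 * r.standard_normal(c.shape) for _ in range(n)]
    speeds = [1.0, 2.0, 4.0, 8.0]                 # heterogeneous node speeds
    W = np.full((n, n), 1.0 / n)                  # complete-graph gossip weights
    xs = np.full((n, d), 1.0 / d)                 # start at the simplex center
    for t in range(50):
        xs, batches = dabmd_round(xs, W, sample_grads, speeds, eta=0.1, time_budget=1.0, rng=rng)
    print("minibatch sizes per node:", batches)   # e.g. [1, 2, 4, 8]: fast nodes do more work
    print("node 0 iterate:", np.round(xs[0], 3))  # mass shifts toward coordinates with small c

The key design point the sketch tries to show is that no node ever waits on a slow neighbor: every node spends the same fixed time budget per round, and heterogeneity surfaces only as different minibatch sizes. The multiple-consensus-iteration variant discussed in the paper would correspond to applying the gossip averaging step several times per round instead of once.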

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-eshraghi20a,
  title     = {Distributed Online Optimization over a Heterogeneous Network with Any-Batch Mirror Descent},
  author    = {Eshraghi, Nima and Liang, Ben},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2933--2942},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/eshraghi20a/eshraghi20a.pdf},
  url       = {https://proceedings.mlr.press/v119/eshraghi20a.html},
  abstract  = {In distributed online optimization over a computing network with heterogeneous nodes, slow nodes can adversely affect the progress of fast nodes, leading to drastic slowdown of the overall convergence process. To address this issue, we consider a new algorithm termed Distributed Any-Batch Mirror Descent (DABMD), which is based on distributed Mirror Descent but uses a fixed per-round computing time to limit the waiting by fast nodes to receive information updates from slow nodes. DABMD is characterized by varying minibatch sizes across nodes. It is applicable to a broader range of problems compared with existing distributed online optimization methods such as those based on dual averaging, and it accommodates time-varying network topology. We study two versions of DABMD, depending on whether the computing nodes average their primal variables via single or multiple consensus iterations. We show that both versions provide strong theoretical performance guarantee, by deriving upperbounds on their expected dynamic regret, which capture the variability in minibatch sizes. Our experimental results show substantial reduction in cost and acceleration in convergence compared with the known best alternative.}
}
Endnote
%0 Conference Paper
%T Distributed Online Optimization over a Heterogeneous Network with Any-Batch Mirror Descent
%A Nima Eshraghi
%A Ben Liang
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-eshraghi20a
%I PMLR
%P 2933--2942
%U https://proceedings.mlr.press/v119/eshraghi20a.html
%V 119
%X In distributed online optimization over a computing network with heterogeneous nodes, slow nodes can adversely affect the progress of fast nodes, leading to drastic slowdown of the overall convergence process. To address this issue, we consider a new algorithm termed Distributed Any-Batch Mirror Descent (DABMD), which is based on distributed Mirror Descent but uses a fixed per-round computing time to limit the waiting by fast nodes to receive information updates from slow nodes. DABMD is characterized by varying minibatch sizes across nodes. It is applicable to a broader range of problems compared with existing distributed online optimization methods such as those based on dual averaging, and it accommodates time-varying network topology. We study two versions of DABMD, depending on whether the computing nodes average their primal variables via single or multiple consensus iterations. We show that both versions provide strong theoretical performance guarantee, by deriving upperbounds on their expected dynamic regret, which capture the variability in minibatch sizes. Our experimental results show substantial reduction in cost and acceleration in convergence compared with the known best alternative.
APA
Eshraghi, N. & Liang, B. (2020). Distributed Online Optimization over a Heterogeneous Network with Any-Batch Mirror Descent. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2933-2942. Available from https://proceedings.mlr.press/v119/eshraghi20a.html.
