Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go?

Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Peter Glynn, Yinyu Ye, Li-Jia Li, Li Fei-Fei
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5970-5979, 2018.

Abstract

One of the most widely used optimization methods for large-scale machine learning problems is distributed asynchronous stochastic gradient descent (DASGD). However, a key issue that arises here is that of delayed gradients: when a “worker” node asynchronously contributes a gradient update to the “master”, the global model parameter may have changed, rendering this information stale. In massively parallel computing grids, these delays can quickly add up if the computational throughput of a node is saturated, so the convergence of DASGD is uncertain under these conditions. Nevertheless, by using a judiciously chosen quasilinear step-size sequence, we show that it is possible to amortize these delays and achieve global convergence with probability 1, even when the delays grow at a polynomial rate. In this way, our results help reaffirm the successful application of DASGD to large-scale optimization problems.
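To make the delayed-gradient setting concrete, below is a minimal single-process sketch of a DASGD-style master update in Python. It is an illustrative assumption rather than the paper's algorithm or experiments: the quadratic objective, the randomly drawn polynomially growing delays, and the 1/(n log n) step-size are placeholders standing in for the "judiciously chosen quasilinear step-size sequence" the abstract refers to.

    # Minimal single-process simulation of DASGD with stale (delayed) gradients.
    # Illustrative sketch only: the objective, the delay model, and the step-size
    # schedule below are placeholder assumptions, not the paper's exact choices.

    import math
    import random

    import numpy as np


    def grad(x):
        # Noisy gradient of the placeholder objective f(x) = 0.5 * ||x||^2.
        return x + 0.01 * np.random.randn(*x.shape)


    def step_size(n, gamma=1.0):
        # Illustrative "quasilinear" schedule gamma_n ~ gamma / (n log n);
        # the paper's exact sequence may differ.
        return gamma / ((n + 1) * math.log(n + 2))


    def dasgd(num_iters=5000, dim=10, max_poly_delay=0.5):
        x = np.ones(dim)          # current global model held by the master
        history = [x.copy()]      # past iterates, so stale copies can be looked up
        for n in range(num_iters):
            # A worker reports a gradient computed at an old iterate; the staleness
            # is drawn up to ~n**max_poly_delay to mimic polynomially growing
            # delays (an assumption made for this demo).
            delay = random.randint(0, int(n ** max_poly_delay))
            stale_x = history[max(0, n - delay)]
            x = x - step_size(n) * grad(stale_x)
            history.append(x.copy())
        return x


    if __name__ == "__main__":
        x_final = dasgd()
        # The squared norm should have shrunk substantially from its initial value of 10.0.
        print("final squared norm:", float(x_final @ x_final))

The intuition the sketch tries to capture is the one stated in the abstract: a step-size that decays quickly enough can amortize the staleness of the reported gradients while still decaying slowly enough (its sum diverges) to drive the iterates toward a global optimum.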

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-zhou18b,
  title     = {Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go?},
  author    = {Zhou, Zhengyuan and Mertikopoulos, Panayotis and Bambos, Nicholas and Glynn, Peter and Ye, Yinyu and Li, Li-Jia and Fei-Fei, Li},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5970--5979},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/zhou18b/zhou18b.pdf},
  url       = {https://proceedings.mlr.press/v80/zhou18b.html},
  abstract  = {One of the most widely used optimization methods for large-scale machine learning problems is distributed asynchronous stochastic gradient descent (DASGD). However, a key issue that arises here is that of delayed gradients: when a “worker” node asynchronously contributes a gradient update to the “master”, the global model parameter may have changed, rendering this information stale. In massively parallel computing grids, these delays can quickly add up if the computational throughput of a node is saturated, so the convergence of DASGD is uncertain under these conditions. Nevertheless, by using a judiciously chosen quasilinear step-size sequence, we show that it is possible to amortize these delays and achieve global convergence with probability 1, even when the delays grow at a polynomial rate. In this way, our results help reaffirm the successful application of DASGD to large-scale optimization problems.}
}
Endnote
%0 Conference Paper
%T Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go?
%A Zhengyuan Zhou
%A Panayotis Mertikopoulos
%A Nicholas Bambos
%A Peter Glynn
%A Yinyu Ye
%A Li-Jia Li
%A Li Fei-Fei
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-zhou18b
%I PMLR
%P 5970--5979
%U https://proceedings.mlr.press/v80/zhou18b.html
%V 80
%X One of the most widely used optimization methods for large-scale machine learning problems is distributed asynchronous stochastic gradient descent (DASGD). However, a key issue that arises here is that of delayed gradients: when a “worker” node asynchronously contributes a gradient update to the “master”, the global model parameter may have changed, rendering this information stale. In massively parallel computing grids, these delays can quickly add up if the computational throughput of a node is saturated, so the convergence of DASGD is uncertain under these conditions. Nevertheless, by using a judiciously chosen quasilinear step-size sequence, we show that it is possible to amortize these delays and achieve global convergence with probability 1, even when the delays grow at a polynomial rate. In this way, our results help reaffirm the successful application of DASGD to large-scale optimization problems.
APA
Zhou, Z., Mertikopoulos, P., Bambos, N., Glynn, P., Ye, Y., Li, L. & Fei-Fei, L. (2018). Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go? Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5970-5979. Available from https://proceedings.mlr.press/v80/zhou18b.html.