Asynchronous Distributed ADMM for Consensus Optimization

Ruiliang Zhang, James Kwok
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1701-1709, 2014.

Abstract

Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM). However, ADMM suffers from the straggler problem, as its updates have to be synchronized across all workers. In this paper, we propose an asynchronous ADMM algorithm that uses two conditions to control the asynchrony: partial barrier and bounded delay. The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate reduces to that of its synchronous counterpart). Experiments on different distributed ADMM applications show that asynchrony reduces the time spent waiting on the network, and achieves faster convergence than the synchronous counterpart in terms of wall clock time.
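
For context, the global consensus problem and the synchronous scaled-form ADMM updates the abstract refers to are as follows. This is the standard textbook formulation (e.g., Boyd et al., 2011); the notation here is a common convention and need not match the paper's:

```latex
% Global consensus problem over N workers:
\min_{x_1,\dots,x_N,\,z} \; \sum_{i=1}^{N} f_i(x_i)
\quad \text{s.t.} \quad x_i = z, \;\; i = 1,\dots,N.

% Synchronous scaled-form ADMM updates:
x_i^{t+1} = \arg\min_{x_i} \; f_i(x_i) + \tfrac{\rho}{2}\bigl\|x_i - z^{t} + u_i^{t}\bigr\|^2,
\qquad
z^{t+1} = \frac{1}{N}\sum_{i=1}^{N} \bigl(x_i^{t+1} + u_i^{t}\bigr),
\qquad
u_i^{t+1} = u_i^{t} + x_i^{t+1} - z^{t+1}.
```

Synchronization enters through the z-update, which averages over all N workers; a single slow worker therefore stalls every iteration, which is exactly the straggler problem the abstract describes.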
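Below is a minimal single-process sketch of how the two asynchrony controls might interact at the master, under illustrative assumptions: quadratic local objectives f_i(x) = ½‖x − a_i‖² (so the x-update has a closed form), a partial barrier that waits for any S of the N workers, and a bounded delay of tau iterations. The function name and all parameters are hypothetical, not the paper's notation:

```python
import numpy as np

def async_consensus_admm(a, rho=1.0, S=2, tau=3, iters=200, seed=0):
    """Simulate asynchronous consensus ADMM on f_i(x) = 0.5*||x - a_i||^2."""
    rng = np.random.default_rng(seed)
    N, dim = a.shape
    x = np.zeros_like(a)             # workers' local copies of the variable
    u = np.zeros_like(a)             # scaled dual variables, one per worker
    z = np.zeros(dim)                # master's consensus variable
    last = np.zeros(N, dtype=int)    # iteration of each worker's last report

    for t in range(1, iters + 1):
        # Partial barrier: the master proceeds once any S workers arrive
        # (simulated here by a random subset of size S).
        arrived = set(rng.choice(N, size=S, replace=False).tolist())
        # Bounded delay: any worker staler than tau iterations must report.
        arrived |= {i for i in range(N) if t - last[i] > tau}
        for i in arrived:
            # Closed-form x-update for the quadratic local objective:
            #   argmin_x 0.5*||x - a_i||^2 + (rho/2)*||x - z + u_i||^2.
            x[i] = (a[i] + rho * (z - u[i])) / (1.0 + rho)
            u[i] += x[i] - z         # dual step against the z this worker saw
            last[i] = t
        # Master averages all current (possibly stale) worker copies.
        z = (x + u).mean(axis=0)
    return z

# Usage: 5 workers, 3-dimensional variable; the optimum is a.mean(axis=0).
a = np.arange(15.0).reshape(5, 3)
print(async_consensus_admm(a))       # close to [6., 7., 8.]
```

The bounded-delay check guarantees that every worker's contribution to the consensus average is at most tau iterations stale, which is the kind of staleness control the abstract's second condition provides.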

Cite this Paper

BibTeX
@InProceedings{pmlr-v32-zhange14,
  title     = {Asynchronous Distributed {ADMM} for Consensus Optimization},
  author    = {Zhang, Ruiliang and Kwok, James},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1701--1709},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/zhange14.pdf},
  url       = {https://proceedings.mlr.press/v32/zhange14.html}
}
EndNote
%0 Conference Paper
%T Asynchronous Distributed ADMM for Consensus Optimization
%A Ruiliang Zhang
%A James Kwok
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-zhange14
%I PMLR
%P 1701--1709
%U https://proceedings.mlr.press/v32/zhange14.html
%V 32
%N 2
RIS
TY  - CPAPER
TI  - Asynchronous Distributed ADMM for Consensus Optimization
AU  - Ruiliang Zhang
AU  - James Kwok
BT  - Proceedings of the 31st International Conference on Machine Learning
DA  - 2014/06/18
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-zhange14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 32
IS  - 2
SP  - 1701
EP  - 1709
L1  - http://proceedings.mlr.press/v32/zhange14.pdf
UR  - https://proceedings.mlr.press/v32/zhange14.html
ER  -
APA
Zhang, R. & Kwok, J. (2014). Asynchronous Distributed ADMM for Consensus Optimization. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1701-1709. Available from https://proceedings.mlr.press/v32/zhange14.html.