Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction

Tom Goldstein, Gavin Taylor, Kawika Barabin, Kent Sayre
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:1151-1158, 2016.

Abstract

Recent approaches to distributed model fitting rely heavily on consensus ADMM, where each node solves small sub-problems using only local data. We propose iterative methods that solve global sub-problems over an entire distributed dataset. This is possible using transpose reduction strategies that allow a single node to solve least-squares over massive datasets without putting all the data in one place. This results in simple iterative methods that avoid the expensive inner loops required for consensus methods. We analyze the convergence rates of the proposed schemes and demonstrate the efficiency of this approach by fitting linear classifiers and sparse linear models to large datasets using a distributed implementation with up to 20,000 cores in far less time than previous approaches.
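To make the transpose-reduction idea concrete, consider a plain least-squares problem min_x ||D x - b||^2 with the data matrix D partitioned by rows across nodes. The global solution depends on the data only through D^T D and D^T b, which are sums of the small local products D_i^T D_i and D_i^T b_i, so no single node ever needs the full dataset. The sketch below is not the paper's implementation; it is a minimal NumPy illustration in which the cluster-wide reduction is simulated by an in-process sum, and all names (chunks, local_grams, and so on) are made up for the example. The paper's unwrapped ADMM applies this kind of reduction to the global least-squares sub-problem solved at each iteration.

import numpy as np

# Minimal sketch of transpose reduction for least-squares: each simulated
# node holds a block of rows (D_i, b_i) and reports only the d x d matrix
# D_i^T D_i and the d-vector D_i^T b_i.
rng = np.random.default_rng(0)
d = 5                                        # number of features
chunks = [(rng.standard_normal((100, d)),    # local rows D_i
           rng.standard_normal(100))         # local targets b_i
          for _ in range(4)]                 # 4 simulated nodes

# "Map": each node computes its local Gram matrix and moment vector.
local_grams = [Di.T @ Di for Di, _ in chunks]
local_moms  = [Di.T @ bi for Di, bi in chunks]

# "Reduce": on a real cluster this would be an allreduce (sum); the result
# is only d x d, independent of how many rows the dataset has.
G = sum(local_grams)          # = D^T D
m = sum(local_moms)           # = D^T b

x = np.linalg.solve(G, m)     # global least-squares solution

# Sanity check against solving with all the data in one place.
D = np.vstack([Di for Di, _ in chunks])
b = np.concatenate([bi for _, bi in chunks])
x_ref, *_ = np.linalg.lstsq(D, b, rcond=None)
print(np.allclose(x, x_ref))  # True

The communicated quantities scale with d^2 rather than with the number of rows, which is what allows a single node to solve the dataset-wide least-squares problem without the data ever being gathered in one place.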

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-goldstein16,
  title     = {Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction},
  author    = {Goldstein, Tom and Taylor, Gavin and Barabin, Kawika and Sayre, Kent},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {1151--1158},
  year      = {2016},
  editor    = {Gretton, Arthur and Robert, Christian C.},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/goldstein16.pdf},
  url       = {https://proceedings.mlr.press/v51/goldstein16.html},
  abstract  = {Recent approaches to distributed model fitting rely heavily on consensus ADMM, where each node solves small sub-problems using only local data. We propose iterative methods that solve global sub-problems over an entire distributed dataset. This is possible using transpose reduction strategies that allow a single node to solve least-squares over massive datasets without putting all the data in one place. This results in simple iterative methods that avoid the expensive inner loops required for consensus methods. We analyze the convergence rates of the proposed schemes and demonstrate the efficiency of this approach by fitting linear classifiers and sparse linear models to large datasets using a distributed implementation with up to 20,000 cores in far less time than previous approaches.}
}
Endnote
%0 Conference Paper
%T Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction
%A Tom Goldstein
%A Gavin Taylor
%A Kawika Barabin
%A Kent Sayre
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2016
%E Arthur Gretton
%E Christian C. Robert
%F pmlr-v51-goldstein16
%I PMLR
%P 1151--1158
%U https://proceedings.mlr.press/v51/goldstein16.html
%V 51
%X Recent approaches to distributed model fitting rely heavily on consensus ADMM, where each node solves small sub-problems using only local data. We propose iterative methods that solve global sub-problems over an entire distributed dataset. This is possible using transpose reduction strategies that allow a single node to solve least-squares over massive datasets without putting all the data in one place. This results in simple iterative methods that avoid the expensive inner loops required for consensus methods. We analyze the convergence rates of the proposed schemes and demonstrate the efficiency of this approach by fitting linear classifiers and sparse linear models to large datasets using a distributed implementation with up to 20,000 cores in far less time than previous approaches.
RIS
TY - CPAPER
TI - Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction
AU - Tom Goldstein
AU - Gavin Taylor
AU - Kawika Barabin
AU - Kent Sayre
BT - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
DA - 2016/05/02
ED - Arthur Gretton
ED - Christian C. Robert
ID - pmlr-v51-goldstein16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 51
SP - 1151
EP - 1158
L1 - http://proceedings.mlr.press/v51/goldstein16.pdf
UR - https://proceedings.mlr.press/v51/goldstein16.html
AB - Recent approaches to distributed model fitting rely heavily on consensus ADMM, where each node solves small sub-problems using only local data. We propose iterative methods that solve global sub-problems over an entire distributed dataset. This is possible using transpose reduction strategies that allow a single node to solve least-squares over massive datasets without putting all the data in one place. This results in simple iterative methods that avoid the expensive inner loops required for consensus methods. We analyze the convergence rates of the proposed schemes and demonstrate the efficiency of this approach by fitting linear classifiers and sparse linear models to large datasets using a distributed implementation with up to 20,000 cores in far less time than previous approaches.
ER -
APA
Goldstein, T., Taylor, G., Barabin, K., & Sayre, K. (2016). Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:1151-1158. Available from https://proceedings.mlr.press/v51/goldstein16.html.