An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization

Necdet Aybat, Zi Wang, Garud Iyengar
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:2454-2462, 2015.

Abstract

We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(1/ε)) DFAL iterations, which require O(ψ_max^1.5/d_min ⋅ 1/ε) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian, and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods; and demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.
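To make the composite model concrete, below is a minimal, centralized sketch (not the DFAL algorithm itself) of the proximal gradient step the abstract refers to, applied to a sparse-group LASSO objective 0.5·||Ax − b||² + λ₁||x||₁ + λ₂ Σ_g ||x_g||₂ with non-overlapping groups. All names (prox_grad_sparse_group_lasso, lam1, lam2, etc.) are illustrative assumptions, not notation from the paper; the prox of the sparse-group penalty is computed by elementwise soft-thresholding followed by groupwise shrinkage.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def group_shrink(v, tau, groups):
    """Blockwise shrinkage: prox of tau * sum_g ||v_g||_2 for non-overlapping groups."""
    out = v.copy()
    for g in groups:
        norm_g = np.linalg.norm(v[g])
        out[g] = 0.0 if norm_g <= tau else (1.0 - tau / norm_g) * v[g]
    return out

def prox_grad_sparse_group_lasso(A, b, groups, lam1, lam2, n_iters=500):
    """Proximal gradient on 0.5*||Ax - b||^2 + lam1*||x||_1 + lam2*sum_g ||x_g||_2."""
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient
    t = 1.0 / L                        # fixed step size
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)       # gradient of the smooth part
        z = x - t * grad               # forward (gradient) step
        # backward (prox) step: l1 shrinkage, then group shrinkage
        x = group_shrink(soft_threshold(z, t * lam1), t * lam2, groups)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    groups = [list(range(i, i + 5)) for i in range(0, 20, 5)]
    x_hat = prox_grad_sparse_group_lasso(A, b, groups, lam1=0.1, lam2=0.1)
    print(np.round(x_hat, 2))
```

In DFAL, steps of this kind are performed locally at each node on its private cost, with communication restricted to neighbors in the graph; the sketch above only illustrates the single-machine proximal gradient building block.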

Cite this Paper

BibTeX
@InProceedings{pmlr-v37-aybat15,
  title     = {An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization},
  author    = {Aybat, Necdet and Wang, Zi and Iyengar, Garud},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {2454--2462},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/aybat15.pdf},
  url       = {https://proceedings.mlr.press/v37/aybat15.html},
  abstract  = {We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any $\epsilon > 0$, an $\epsilon$-optimal and $\epsilon$-feasible solution can be computed within $O(\log(1/\epsilon))$ DFAL iterations, which require $O(\psi_{\max}^{1.5}/d_{\min} \cdot 1/\epsilon)$ proximal gradient computations and communications per node in total, where $\psi_{\max}$ denotes the largest eigenvalue of the graph Laplacian, and $d_{\min}$ is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods; and demonstrate the efficiency of DFAL on large scale sparse-group LASSO problems.}
}
Endnote
%0 Conference Paper
%T An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
%A Necdet Aybat
%A Zi Wang
%A Garud Iyengar
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-aybat15
%I PMLR
%P 2454--2462
%U https://proceedings.mlr.press/v37/aybat15.html
%V 37
%X We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(1/ε)) DFAL iterations, which require O(ψ_max^1.5/d_min ⋅ 1/ε) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian, and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods; and demonstrate the efficiency of DFAL on large scale sparse-group LASSO problems.
RIS
TY - CPAPER
TI - An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
AU - Necdet Aybat
AU - Zi Wang
AU - Garud Iyengar
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-aybat15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 2454
EP - 2462
L1 - http://proceedings.mlr.press/v37/aybat15.pdf
UR - https://proceedings.mlr.press/v37/aybat15.html
AB - We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(1/ε)) DFAL iterations, which require O(ψ_max^1.5/d_min ⋅ 1/ε) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian, and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods; and demonstrate the efficiency of DFAL on large scale sparse-group LASSO problems.
ER -
APA
Aybat, N., Wang, Z. & Iyengar, G. (2015). An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:2454-2462. Available from https://proceedings.mlr.press/v37/aybat15.html.