An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:2454-2462, 2015.
Abstract
We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal, and that for any $\epsilon > 0$, an $\epsilon$-optimal and $\epsilon$-feasible solution can be computed within $\mathcal{O}(\log(1/\epsilon))$ DFAL iterations, which require $\mathcal{O}(\psi_{\max}^{1.5}/d_{\min} \cdot 1/\epsilon)$ proximal gradient computations and communications per node in total, where $\psi_{\max}$ denotes the largest eigenvalue of the graph Laplacian and $d_{\min}$ is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods, and demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.
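The complexity bound above counts proximal gradient computations per node. For concreteness, the sketch below shows that per-node primitive for the sparse-group LASSO setting the abstract mentions: one proximal gradient step whose prox decomposes into elementwise soft-thresholding followed by group-wise shrinkage (the standard decomposition for this penalty). All names, the least-squares smooth term, and the group structure here are illustrative assumptions, not taken from the paper, and the sketch omits the augmented Lagrangian and communication layers that DFAL itself adds around such steps.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise l1 prox: prox of t * ||.||_1 at v."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sparse_group_lasso(v, step, lam1, lam2, groups):
    """Prox of step * (lam1 * ||x||_1 + lam2 * sum_g ||x_g||_2).

    This prox decomposes: soft-threshold first, then shrink each
    group's l2 norm (hypothetical helper, standard technique).
    """
    x = soft_threshold(v, step * lam1)
    for g in groups:                      # g: index array of one group
        norm_g = np.linalg.norm(x[g])
        if norm_g > 0:
            x[g] *= max(0.0, 1.0 - step * lam2 / norm_g)
    return x

def prox_gradient_step(x, grad_f, step, lam1, lam2, groups):
    """One proximal gradient step on f + sparse-group LASSO penalty."""
    return prox_sparse_group_lasso(x - step * grad_f(x),
                                   step, lam1, lam2, groups)

# Toy usage with an assumed smooth part f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L = ||A||_2^2
groups = [np.arange(0, 5), np.arange(5, 10)]
x = np.zeros(10)
for _ in range(200):
    x = prox_gradient_step(x, grad_f, step, lam1=0.1, lam2=0.1,
                           groups=groups)
```

In a distributed run, each node would apply such a step to its private cost on a local copy of the variable, with the augmented Lagrangian terms enforcing agreement across edges; this single-node sketch only isolates the operation whose count appears in the bound.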