An Asynchronous Parallel Stochastic Coordinate Descent Algorithm

Ji Liu, Steve Wright, Christopher Ré, Victor Bittorf, Srikrishna Sridhar;
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):469-477, 2014.

Abstract

We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property, and a sublinear rate of O(1/K), where K is the iteration count, on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is O(n^{1/2}) in unconstrained optimization and O(n^{1/4}) in the separable-constrained case, where n is the number of variables. We describe results from an implementation on a 40-core system.
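The core update the abstract describes is easy to illustrate. Below is a minimal sketch in Python, assuming a least-squares objective f(x) = 0.5‖Ax − b‖²: several threads repeatedly pick a random coordinate, compute that coordinate's gradient from the shared (possibly stale) iterate, and write the update back without locking. The objective, the step size γ = 1/L_max, and the thread and update counts are illustrative choices, not the paper's exact constants or its bounded-delay analysis.

```python
# Sketch of asynchronous stochastic coordinate descent on a shared iterate.
# Assumes f(x) = 0.5 * ||A x - b||^2; the coordinate gradient is A[:, i] @ (A x - b)
# and the coordinate-wise Lipschitz constant for coordinate i is ||A[:, i]||^2.
import threading
import numpy as np

n = 200
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x = np.zeros(n)                        # shared iterate, updated with no locks

L_max = (A * A).sum(axis=0).max()      # max coordinate-wise Lipschitz constant
gamma = 1.0 / L_max                    # conservative step size (illustrative)

def worker(seed, num_updates):
    local_rng = np.random.default_rng(seed)   # per-thread generator
    for _ in range(num_updates):
        i = local_rng.integers(n)             # coordinate chosen uniformly at random
        g_i = A[:, i] @ (A @ x - b)           # gradient read from possibly stale x
        x[i] -= gamma * g_i                   # unsynchronized write-back

threads = [threading.Thread(target=worker, args=(s, 5000)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```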
