An Asynchronous Parallel Stochastic Coordinate Descent Algorithm

Ji Liu, Steve Wright, Christopher Re, Victor Bittorf, Srikrishna Sridhar
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):469-477, 2014.

Abstract

We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate (1/K) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is O(n^{1/2}) in unconstrained optimization and O(n^{1/4}) in the separable-constrained case, where n is the number of variables. We describe results from implementation on 40-core processors.
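
The abstract compresses the method into a sentence, so a toy sketch may help orient a reader: each of several threads repeatedly samples a coordinate i uniformly at random, evaluates the i-th partial gradient at whatever (possibly stale) shared iterate it happens to read, and writes its update back without locking. The Python sketch below is our own illustration under those assumptions, on a made-up quadratic test problem, and is not the authors' implementation; in CPython the GIL prevents true parallel speedup, so it demonstrates the asynchronous update pattern rather than the paper's 40-core performance.

import threading

import numpy as np

# Test problem: an unconstrained strongly convex quadratic
# f(x) = 0.5 x^T A x - b^T x, so the i-th gradient component is (Ax)_i - b_i.
n = 200
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M.T @ M + n * np.eye(n)        # symmetric positive definite Hessian
b = rng.standard_normal(n)
L_max = float(np.max(np.diag(A)))  # largest coordinate-wise Lipschitz constant

x = np.zeros(n)                    # shared iterate; threads update it without locks

def worker(seed, iters=50_000):
    local_rng = np.random.default_rng(seed)
    for _ in range(iters):
        i = int(local_rng.integers(n))  # sample a coordinate uniformly at random
        g = A[i] @ x - b[i]             # partial gradient at a possibly stale read of x
        x[i] -= g / L_max               # lock-free write; step 1/L_max (gamma = 1)

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

x_star = np.linalg.solve(A, b)
print("||x - x*|| =", np.linalg.norm(x - x_star))

The lock-free write is the crux of the asynchronous scheme: threads may interleave reads and writes arbitrarily, and the paper's analysis bounds the cost of that staleness provided the number of processors stays within the O(n^{1/2}) regime quoted above (O(n^{1/4}) with separable constraints, where each update would also project onto the coordinate's feasible interval).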

Cite this Paper

BibTeX
@InProceedings{pmlr-v32-liud14,
  title     = {An Asynchronous Parallel Stochastic Coordinate Descent Algorithm},
  author    = {Liu, Ji and Wright, Steve and Re, Christopher and Bittorf, Victor and Sridhar, Srikrishna},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {469--477},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/liud14.pdf},
  url       = {https://proceedings.mlr.press/v32/liud14.html},
  abstract  = {We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate (1/K) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is O(n^{1/2}) in unconstrained optimization and O(n^{1/4}) in the separable-constrained case, where n is the number of variables. We describe results from implementation on 40-core processors.}
}
Endnote
%0 Conference Paper
%T An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
%A Ji Liu
%A Steve Wright
%A Christopher Re
%A Victor Bittorf
%A Srikrishna Sridhar
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-liud14
%I PMLR
%P 469--477
%U https://proceedings.mlr.press/v32/liud14.html
%V 32
%N 2
%X We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate (1/K) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is O(n^{1/2}) in unconstrained optimization and O(n^{1/4}) in the separable-constrained case, where n is the number of variables. We describe results from implementation on 40-core processors.
RIS
TY - CPAPER
TI - An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
AU - Ji Liu
AU - Steve Wright
AU - Christopher Re
AU - Victor Bittorf
AU - Srikrishna Sridhar
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/06/18
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-liud14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 2
SP - 469
EP - 477
L1 - http://proceedings.mlr.press/v32/liud14.pdf
UR - https://proceedings.mlr.press/v32/liud14.html
AB - We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate (1/K) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is O(n^{1/2}) in unconstrained optimization and O(n^{1/4}) in the separable-constrained case, where n is the number of variables. We describe results from implementation on 40-core processors.
ER -
APA
Liu, J., Wright, S., Re, C., Bittorf, V. & Sridhar, S. (2014). An Asynchronous Parallel Stochastic Coordinate Descent Algorithm. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):469-477. Available from https://proceedings.mlr.press/v32/liud14.html.
