Efficient Distributed Learning with Sparsity

Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:3636-3645, 2017.

Abstract

We propose a novel, efficient approach for distributed sparse learning with observations randomly partitioned across machines. In each round of the proposed method, worker machines compute the gradient of the loss on local data and the master machine solves a shifted $\ell_1$ regularized loss minimization problem. After a number of communication rounds that scales only logarithmically with the number of machines, and is independent of other parameters of the problem, the proposed approach provably matches the estimation error bound of centralized methods.
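To make the round structure concrete, the following is a minimal sketch of the communication pattern described above, assuming a least-squares loss and a plain proximal-gradient (ISTA) inner solver on the master. The function names (distributed_sparse_learning, master_subproblem), the fixed regularization level lam, and the inner-loop settings are illustrative choices, not details taken from the paper, which also prescribes how the regularization level should shrink across rounds.

import numpy as np

def soft_threshold(z, tau):
    # Elementwise soft-thresholding: proximal operator of tau * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def local_gradient(X, y, w):
    # Gradient of the least-squares loss (1/2n) * ||Xw - y||^2 on one machine.
    n = X.shape[0]
    return X.T @ (X @ w - y) / n

def master_subproblem(X1, y1, w, global_grad, lam, step=None, iters=500):
    # Solve the shifted l1-regularized problem on the master's local data:
    #   min_v  L_1(v) - <grad L_1(w) - global_grad, v> + lam * ||v||_1
    # using proximal gradient descent; `step` and `iters` are illustrative.
    n = X1.shape[0]
    shift = local_gradient(X1, y1, w) - global_grad
    if step is None:
        # Inverse Lipschitz constant of the local quadratic loss.
        step = n / (np.linalg.norm(X1, 2) ** 2)
    v = w.copy()
    for _ in range(iters):
        g = local_gradient(X1, y1, v) - shift
        v = soft_threshold(v - step * g, step * lam)
    return v

def distributed_sparse_learning(data_shards, w0, lam, rounds=10):
    # data_shards: list of (X_m, y_m) pairs, one per machine; shard 0 is the master.
    w = w0.copy()
    for _ in range(rounds):
        # Each worker computes the gradient of its local loss at the current iterate
        # (one round of communication to the master).
        grads = [local_gradient(X, y, w) for X, y in data_shards]
        global_grad = np.mean(grads, axis=0)
        # The master solves the shifted l1-regularized minimization and broadcasts w.
        X1, y1 = data_shards[0]
        w = master_subproblem(X1, y1, w, global_grad, lam)
    return w

Per round, only gradients and the updated iterate are communicated; all data-intensive work stays on the machines that hold the data, which is what keeps the communication cost low.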

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-wang17f,
  title = {Efficient Distributed Learning with Sparsity},
  author = {Jialei Wang and Mladen Kolar and Nathan Srebro and Tong Zhang},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages = {3636--3645},
  year = {2017},
  editor = {Precup, Doina and Teh, Yee Whye},
  volume = {70},
  series = {Proceedings of Machine Learning Research},
  month = {06--11 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v70/wang17f/wang17f.pdf},
  url = {https://proceedings.mlr.press/v70/wang17f.html},
  abstract = {We propose a novel, efficient approach for distributed sparse learning with observations randomly partitioned across machines. In each round of the proposed method, worker machines compute the gradient of the loss on local data and the master machine solves a shifted $\ell_1$ regularized loss minimization problem. After a number of communication rounds that scales only logarithmically with the number of machines, and independent of other parameters of the problem, the proposed approach provably matches the estimation error bound of centralized methods.}
}
Endnote
%0 Conference Paper
%T Efficient Distributed Learning with Sparsity
%A Jialei Wang
%A Mladen Kolar
%A Nathan Srebro
%A Tong Zhang
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-wang17f
%I PMLR
%P 3636--3645
%U https://proceedings.mlr.press/v70/wang17f.html
%V 70
%X We propose a novel, efficient approach for distributed sparse learning with observations randomly partitioned across machines. In each round of the proposed method, worker machines compute the gradient of the loss on local data and the master machine solves a shifted $\ell_1$ regularized loss minimization problem. After a number of communication rounds that scales only logarithmically with the number of machines, and independent of other parameters of the problem, the proposed approach provably matches the estimation error bound of centralized methods.
APA
Wang, J., Kolar, M., Srebro, N. & Zhang, T. (2017). Efficient Distributed Learning with Sparsity. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:3636-3645. Available from https://proceedings.mlr.press/v70/wang17f.html.
