Communication-Efficient Distributed Optimization using an Approximate Newton-type Method

Ohad Shamir, Nati Srebro, Tong Zhang
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1000-1008, 2014.

Abstract

We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.
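To make the abstract's claim concrete for the quadratic case, below is a minimal, hedged NumPy sketch of a Newton-type distributed iteration of the kind described: each of m machines applies its local Hessian to the globally averaged gradient, and the resulting steps are averaged, compared against the one-shot parameter-averaging baseline. The synthetic least-squares data, the dimensions m, n, d, and the specific update rule are illustrative assumptions, not the paper's exact algorithm or experiments.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic distributed least-squares: m machines, each holding n local samples in d dimensions.
m, n, d = 8, 500, 20
w_star = rng.standard_normal(d)
X = [rng.standard_normal((n, d)) for _ in range(m)]
y = [Xi @ w_star + 0.1 * rng.standard_normal(n) for Xi in X]

# Local quadratic objectives phi_i(w) = (1/2n)||X_i w - y_i||^2,
# so grad phi_i(w) = H_i w - b_i with H_i = X_i^T X_i / n and b_i = X_i^T y_i / n.
H = [Xi.T @ Xi / n for Xi in X]
b = [Xi.T @ yi / n for Xi, yi in zip(X, y)]
H_bar = sum(H) / m   # global Hessian (used here only to form the exact solution / global gradient)
b_bar = sum(b) / m

def global_grad(w):
    # One communication round: average of the local gradients.
    return H_bar @ w - b_bar

# One-shot parameter averaging baseline: each machine solves its own problem once, then average.
w_oneshot = sum(np.linalg.solve(Hi, bi) for Hi, bi in zip(H, b)) / m

# Illustrative Newton-type distributed iteration: every machine solves against its
# *local* Hessian using the *global* gradient, and the steps are averaged.
w = np.zeros(d)
for t in range(5):
    g = global_grad(w)                              # communication: average gradients
    steps = [np.linalg.solve(Hi, g) for Hi in H]    # local approximate-Newton solves
    w = w - sum(steps) / m                          # communication: average iterates

w_exact = np.linalg.solve(H_bar, b_bar)
print("one-shot averaging error :", np.linalg.norm(w_oneshot - w_exact))
print("iterative method error   :", np.linalg.norm(w - w_exact))

Because each local Hessian concentrates around the global one as the per-machine sample size n grows, the averaged local-Newton steps track the exact Newton step more and more closely, which is the intuition behind a convergence rate that improves with the data size.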

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-shamir14,
  title     = {Communication-Efficient Distributed Optimization using an Approximate Newton-type Method},
  author    = {Shamir, Ohad and Srebro, Nati and Zhang, Tong},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1000--1008},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/shamir14.pdf},
  url       = {https://proceedings.mlr.press/v32/shamir14.html},
  abstract  = {We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably \emph{improves} with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.}
}
Endnote
%0 Conference Paper
%T Communication-Efficient Distributed Optimization using an Approximate Newton-type Method
%A Ohad Shamir
%A Nati Srebro
%A Tong Zhang
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-shamir14
%I PMLR
%P 1000--1008
%U https://proceedings.mlr.press/v32/shamir14.html
%V 32
%N 2
%X We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.
RIS
TY - CPAPER
TI - Communication-Efficient Distributed Optimization using an Approximate Newton-type Method
AU - Ohad Shamir
AU - Nati Srebro
AU - Tong Zhang
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/06/18
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-shamir14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 2
SP - 1000
EP - 1008
L1 - http://proceedings.mlr.press/v32/shamir14.pdf
UR - https://proceedings.mlr.press/v32/shamir14.html
AB - We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.
ER -
APA
Shamir, O., Srebro, N., & Zhang, T. (2014). Communication-Efficient Distributed Optimization using an Approximate Newton-type Method. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1000-1008. Available from https://proceedings.mlr.press/v32/shamir14.html.
