DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate

Saeed Soori, Konstantin Mishchenko, Aryan Mokhtari, Maryam Mehri Dehnavi, Mert Gurbuzbalaban
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1965-1976, 2020.

Abstract

In this paper, we consider distributed algorithms for solving the empirical risk minimization problem under the master/worker communication model. We develop a distributed asynchronous quasi-Newton algorithm that can achieve superlinear convergence. To our knowledge, this is the first distributed asynchronous algorithm with superlinear convergence guarantees. Our algorithm is communication-efficient in the sense that at every iteration the master node and workers communicate vectors of size $O(p)$, where $p$ is the dimension of the decision variable. The proposed method is based on a distributed asynchronous averaging scheme of decision vectors and gradients in a way that effectively captures the local Hessian information of the objective function. Our convergence theory supports asynchronous computations subject to both bounded delays and unbounded delays with a bounded time-average. Unlike the majority of the asynchronous optimization literature, we do not require choosing a smaller stepsize when delays are large. We provide numerical experiments that match our theoretical results and showcase significant improvements compared to state-of-the-art distributed algorithms.
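To make the quasi-Newton machinery referenced in the abstract concrete, below is a minimal single-machine sketch of the standard BFGS secant update that methods in this family build on: the Hessian approximation is refreshed from a decision-vector difference and a gradient difference, the same O(p)-sized quantities the averaging scheme communicates. This is our own illustrative simplification, not the paper's DAve-QN update; the function name, skip rule, and example are assumptions.

import numpy as np

def bfgs_update(B, s, y, eps=1e-10):
    # One BFGS update of the Hessian approximation B from the secant pair
    # s = x_new - x_old (decision-vector difference) and
    # y = grad_new - grad_old (gradient difference),
    # chosen so that the secant equation B_new @ s == y holds afterwards.
    Bs = B @ s
    sBs = float(s @ Bs)
    ys = float(y @ s)
    if ys <= eps or sBs <= eps:
        return B  # skip the update when the curvature condition fails
    return B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / ys

# Example: one update on the quadratic f(x) = 0.5 * x^T A x, whose
# gradient difference is exactly A @ s.
rng = np.random.default_rng(0)
A = 2.0 * np.eye(3)
B = np.eye(3)                      # initial Hessian approximation
x_old, x_new = rng.standard_normal(3), rng.standard_normal(3)
s = x_new - x_old
y = A @ s
B = bfgs_update(B, s, y)
assert np.allclose(B @ s, y)       # secant equation holds after the update

Since the update consumes only the pair (s, y), an exchange of such vectors per iteration is consistent with the O(p) communication cost the abstract claims, though the exact quantities DAve-QN sends are specified in the paper itself.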

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-soori20a,
  title = {DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate},
  author = {Soori, Saeed and Mishchenko, Konstantin and Mokhtari, Aryan and Dehnavi, Maryam Mehri and Gurbuzbalaban, Mert},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages = {1965--1976},
  year = {2020},
  editor = {Chiappa, Silvia and Calandra, Roberto},
  volume = {108},
  series = {Proceedings of Machine Learning Research},
  month = {26--28 Aug},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v108/soori20a/soori20a.pdf},
  url = {https://proceedings.mlr.press/v108/soori20a.html},
  abstract = {In this paper, we consider distributed algorithms for solving the empirical risk minimization problem under the master/worker communication model. We develop a distributed asynchronous quasi-Newton algorithm that can achieve superlinear convergence. To our knowledge, this is the first distributed asynchronous algorithm with superlinear convergence guarantees. Our algorithm is communication-efficient in the sense that at every iteration the master node and workers communicate vectors of size $O(p)$, where $p$ is the dimension of the decision variable. The proposed method is based on a distributed asynchronous averaging scheme of decision vectors and gradients in a way that effectively captures the local Hessian information of the objective function. Our convergence theory supports asynchronous computations subject to both bounded delays and unbounded delays with a bounded time-average. Unlike the majority of the asynchronous optimization literature, we do not require choosing a smaller stepsize when delays are large. We provide numerical experiments that match our theoretical results and showcase significant improvements compared to state-of-the-art distributed algorithms.}
}
Endnote
%0 Conference Paper
%T DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate
%A Saeed Soori
%A Konstantin Mishchenko
%A Aryan Mokhtari
%A Maryam Mehri Dehnavi
%A Mert Gurbuzbalaban
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-soori20a
%I PMLR
%P 1965--1976
%U https://proceedings.mlr.press/v108/soori20a.html
%V 108
%X In this paper, we consider distributed algorithms for solving the empirical risk minimization problem under the master/worker communication model. We develop a distributed asynchronous quasi-Newton algorithm that can achieve superlinear convergence. To our knowledge, this is the first distributed asynchronous algorithm with superlinear convergence guarantees. Our algorithm is communication-efficient in the sense that at every iteration the master node and workers communicate vectors of size $O(p)$, where $p$ is the dimension of the decision variable. The proposed method is based on a distributed asynchronous averaging scheme of decision vectors and gradients in a way that effectively captures the local Hessian information of the objective function. Our convergence theory supports asynchronous computations subject to both bounded delays and unbounded delays with a bounded time-average. Unlike the majority of the asynchronous optimization literature, we do not require choosing a smaller stepsize when delays are large. We provide numerical experiments that match our theoretical results and showcase significant improvements compared to state-of-the-art distributed algorithms.
APA
Soori, S., Mishchenko, K., Mokhtari, A., Dehnavi, M.M. & Gurbuzbalaban, M. (2020). DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1965-1976. Available from https://proceedings.mlr.press/v108/soori20a.html.
