ASAGA: Asynchronous Parallel SAGA

Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:46-54, 2017.

Abstract

We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced “perturbed iterate” framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
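For readers unfamiliar with the underlying method, the sketch below illustrates the sequential SAGA update that ASAGA parallelizes: sample a data point, compute its fresh gradient, and combine it with a stored per-sample gradient and the running gradient average to form an unbiased, variance-reduced step. This is a minimal illustrative sketch on a toy least-squares objective, not the authors' implementation; the function and parameter names (saga, step_size, the synthetic data) are assumptions. ASAGA applies updates of this form concurrently and lock-free across cores; the asynchronous bookkeeping analyzed in the paper is omitted here.

    import numpy as np

    # Illustrative sequential SAGA loop on f(x) = (1/n) * sum_i (a_i^T x - b_i)^2.
    # Placeholder names; not the implementation from the paper.
    def saga(A, b, step_size, n_steps, seed=0):
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        memory = np.zeros((n, d))   # stored gradient for each sample
        grad_avg = np.zeros(d)      # running average of the stored gradients
        for _ in range(n_steps):
            j = rng.integers(n)                           # sample a data point uniformly
            g = 2.0 * (A[j] @ x - b[j]) * A[j]            # fresh gradient of f_j at x
            x -= step_size * (g - memory[j] + grad_avg)   # unbiased SAGA direction
            grad_avg += (g - memory[j]) / n               # keep the average consistent
            memory[j] = g                                 # overwrite the stored gradient
        return x

    # Example usage on a small synthetic problem.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true
    x_hat = saga(A, b, step_size=0.01, n_steps=20000)
    print(np.linalg.norm(x_hat - x_true))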

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-leblond17a,
  title     = {{ASAGA: Asynchronous Parallel SAGA}},
  author    = {Leblond, Rémi and Pedregosa, Fabian and Lacoste-Julien, Simon},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {46--54},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/leblond17a/leblond17a.pdf},
  url       = {https://proceedings.mlr.press/v54/leblond17a.html},
  abstract  = {We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced “perturbed iterate” framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.}
}
Endnote
%0 Conference Paper
%T ASAGA: Asynchronous Parallel SAGA
%A Rémi Leblond
%A Fabian Pedregosa
%A Simon Lacoste-Julien
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-leblond17a
%I PMLR
%P 46--54
%U https://proceedings.mlr.press/v54/leblond17a.html
%V 54
%X We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced “perturbed iterate” framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
APA
Leblond, R., Pedregosa, F. & Lacoste-Julien, S. (2017). ASAGA: Asynchronous Parallel SAGA. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:46-54. Available from https://proceedings.mlr.press/v54/leblond17a.html.
