ASAGA: Asynchronous Parallel SAGA

Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:46-54, 2017.

Abstract

We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced “perturbed iterate” framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
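For context, the base algorithm that ASAGA parallelizes is sequential SAGA, which keeps a table of past per-sample gradients and uses it to form variance-reduced updates. The sketch below is a minimal illustration of that sequential update on a made-up least-squares problem, not the authors' ASAGA implementation; the problem data, step size, and iteration count are assumptions for the example.

```python
import numpy as np

# Minimal sequential SAGA sketch on least squares:
# f(x) = (1/n) * sum_i 0.5 * (a_i . x - b_i)^2
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

L = max(np.sum(A**2, axis=1))    # per-sample Lipschitz constant
step = 1.0 / (3.0 * L)           # classical SAGA step size

x = np.zeros(d)
memory = np.zeros((n, d))        # stored past gradients alpha_i
avg = memory.mean(axis=0)        # running mean of the memory

for t in range(2000):
    i = rng.integers(n)
    grad = (A[i] @ x - b[i]) * A[i]        # gradient of f_i at current x
    x -= step * (grad - memory[i] + avg)   # variance-reduced SAGA step
    avg += (grad - memory[i]) / n          # keep the running average current
    memory[i] = grad

# On this strongly convex problem, x converges linearly to x_true.
print(np.linalg.norm(x - x_true))
```

ASAGA runs this update asynchronously across cores without locks, so reads of `x` and the gradient memory may be inconsistent; the paper's "perturbed iterate" analysis is what handles that inconsistency.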

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-leblond17a,
  title     = {{ASAGA: Asynchronous Parallel SAGA}},
  author    = {R\'emi Leblond and Fabian Pedregosa and Simon Lacoste-Julien},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {46--54},
  year      = {2017},
  editor    = {Aarti Singh and Jerry Zhu},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/leblond17a/leblond17a.pdf},
  url       = {http://proceedings.mlr.press/v54/leblond17a.html},
  abstract  = {We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced "perturbed iterate" framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.}
}
Endnote
%0 Conference Paper
%T ASAGA: Asynchronous Parallel SAGA
%A Rémi Leblond
%A Fabian Pedregosa
%A Simon Lacoste-Julien
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-leblond17a
%I PMLR
%J Proceedings of Machine Learning Research
%P 46--54
%U http://proceedings.mlr.press
%V 54
%W PMLR
%X We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced “perturbed iterate” framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
APA
Leblond, R., Pedregosa, F. & Lacoste-Julien, S. (2017). ASAGA: Asynchronous Parallel SAGA. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in PMLR 54:46-54.