SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient

Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2613-2621, 2017.

Abstract

In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to finite-sum minimization problems. Unlike vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; in contrast to SAG/SAGA, SARAH does not require storage of past gradients. The linear convergence rate of SARAH is proven under a strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, a property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm.
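
The recursive gradient estimate described in the abstract admits a compact implementation. Below is a minimal sketch, in Python/NumPy, of the SARAH recursion for min_w (1/n) sum_i f_i(w): each outer iteration starts from a full gradient v_0 = ∇f(w_0), and each inner step draws a random index i_t and updates v_t = ∇f_{i_t}(w_t) − ∇f_{i_t}(w_{t-1}) + v_{t-1}, followed by w_{t+1} = w_t − η v_t. The function and parameter names (grad_i, eta, m, outer_iters) are illustrative rather than taken from the paper, and restarting from the last inner iterate is one practical choice; the theoretical analysis and the SARAH+ variant differ in how the inner loop is terminated.

import numpy as np

def sarah(grad_i, w0, n, eta, m, outer_iters, rng=None):
    """Sketch of the SARAH recursion (names and restart rule are illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    w_prev = np.asarray(w0, dtype=float)
    for _ in range(outer_iters):
        # v_0: full gradient of (1/n) * sum_i f_i at the outer starting point.
        v = np.mean([grad_i(w_prev, i) for i in range(n)], axis=0)
        w = w_prev - eta * v                       # w_1 = w_0 - eta * v_0
        for _ in range(m):
            i = rng.integers(n)
            # Recursive estimate: v_t = grad f_i(w_t) - grad f_i(w_{t-1}) + v_{t-1}.
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev, w = w, w - eta * v             # w_{t+1} = w_t - eta * v_t
        # Restart the next outer loop from the last inner iterate
        # (a common practical choice; the analysis samples a random inner iterate,
        #  and SARAH+ instead stops the inner loop adaptively based on ||v_t||).
        w_prev = w
    return w_prev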

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-nguyen17b,
  title     = {{SARAH}: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient},
  author    = {Lam M. Nguyen and Jie Liu and Katya Scheinberg and Martin Tak{\'a}{\v{c}}},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {2613--2621},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/nguyen17b/nguyen17b.pdf},
  url       = {https://proceedings.mlr.press/v70/nguyen17b.html},
  abstract  = {In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to the finite-sum minimization problems. Different from the vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; when comparing to SAG/SAGA, SARAH does not require a storage of past gradients. The linear convergence rate of SARAH is proven under strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, the property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm.}
}
Endnote
%0 Conference Paper
%T SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
%A Lam M. Nguyen
%A Jie Liu
%A Katya Scheinberg
%A Martin Takáč
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-nguyen17b
%I PMLR
%P 2613--2621
%U https://proceedings.mlr.press/v70/nguyen17b.html
%V 70
%X In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to the finite-sum minimization problems. Different from the vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; when comparing to SAG/SAGA, SARAH does not require a storage of past gradients. The linear convergence rate of SARAH is proven under strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, the property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm.
APA
Nguyen, L.M., Liu, J., Scheinberg, K. & Takáč, M. (2017). SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:2613-2621. Available from https://proceedings.mlr.press/v70/nguyen17b.html.