Conditional Accelerated Lazy Stochastic Gradient Descent

Guanghui Lan, Sebastian Pokutta, Yi Zhou, Daniel Zink
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1965-1974, 2017.

Abstract

In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with an optimal number of calls to a stochastic first-order oracle and a convergence rate of $O(1/\epsilon^2)$, improving over the projection-free, Online Frank-Wolfe based stochastic gradient descent of Hazan and Kale (2012), which converges at rate $O(1/\epsilon^4)$.
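
To make the setting concrete, below is a minimal sketch of a generic stochastic Frank-Wolfe (conditional gradient) loop: a projection-free update driven by a stochastic first-order oracle and a linear minimization oracle. This is an illustration only, not the paper's algorithm; the toy least-squares objective, the simplex feasible region, the growing minibatch schedule, and the step sizes are all assumptions made for the example.

# Illustrative sketch only (assumptions noted above); NOT the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(x) = E[ 0.5 * ||A_i x - b_i||^2 ] over random rows, x on the simplex.
d, n = 20, 200
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def stochastic_gradient(x, batch):
    """Stochastic first-order oracle: gradient estimated on a random minibatch of rows."""
    idx = rng.integers(0, n, size=batch)
    Ab = A[idx]
    return Ab.T @ (Ab @ x - b[idx]) / batch

def lmo_simplex(g):
    """Linear minimization oracle over the simplex: argmin_{v in simplex} <g, v> is a vertex."""
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

def stochastic_frank_wolfe(T=500):
    x = np.full(d, 1.0 / d)                      # start at the simplex barycenter
    for t in range(1, T + 1):
        g = stochastic_gradient(x, batch=8 * t)  # growing batch size (assumption)
        v = lmo_simplex(g)                       # projection-free step via the LMO
        gamma = 2.0 / (t + 2)                    # classic Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v          # convex combination stays feasible
    return x

x_hat = stochastic_frank_wolfe()
print("objective estimate:", 0.5 * np.mean((A @ x_hat - b) ** 2))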

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-lan17a,
  title     = {Conditional Accelerated Lazy Stochastic Gradient Descent},
  author    = {Guanghui Lan and Sebastian Pokutta and Yi Zhou and Daniel Zink},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1965--1974},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/lan17a/lan17a.pdf},
  url       = {https://proceedings.mlr.press/v70/lan17a.html},
  abstract  = {In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with optimal number of calls to a stochastic first-order oracle and convergence rate $O(1/\epsilon^2)$ improving over the projection-free, Online Frank-Wolfe based stochastic gradient descent of (Hazan and Kale, 2012) with convergence rate $O(1/\epsilon^4)$.}
}
Endnote
%0 Conference Paper
%T Conditional Accelerated Lazy Stochastic Gradient Descent
%A Guanghui Lan
%A Sebastian Pokutta
%A Yi Zhou
%A Daniel Zink
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-lan17a
%I PMLR
%P 1965--1974
%U https://proceedings.mlr.press/v70/lan17a.html
%V 70
%X In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with optimal number of calls to a stochastic first-order oracle and convergence rate $O(1/\epsilon^2)$ improving over the projection-free, Online Frank-Wolfe based stochastic gradient descent of (Hazan and Kale, 2012) with convergence rate $O(1/\epsilon^4)$.
APA
Lan, G., Pokutta, S., Zhou, Y. & Zink, D. (2017). Conditional Accelerated Lazy Stochastic Gradient Descent. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1965-1974. Available from https://proceedings.mlr.press/v70/lan17a.html.
