Guanghui Lan, Sebastian Pokutta, Yi Zhou, Daniel Zink;
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1965-1974, 2017.
Abstract
In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with an optimal number of calls to a stochastic first-order oracle and a convergence rate of $O(1/\epsilon^2)$, improving over the projection-free, Online Frank-Wolfe based stochastic gradient descent of Hazan and Kale (2012), which has a convergence rate of $O(1/\epsilon^4)$.
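The paper itself supplies the algorithm and its analysis; as a rough illustration of the projection-free, stochastic first-order oracle setting it works in, the sketch below runs a plain stochastic Frank-Wolfe (conditional gradient) loop over the probability simplex, where a linear minimization oracle replaces projection. This is a minimal baseline sketch under assumed choices (a least-squares toy objective, the simplex as feasible set, step size 2/(t+2)); it is not the accelerated lazy method of the paper, and the names stochastic_frank_wolfe and lmo_simplex are illustrative only.

import numpy as np

def lmo_simplex(grad):
    # Linear minimization oracle over the probability simplex:
    # argmin_{v in simplex} <grad, v> is the vertex with the smallest gradient coordinate.
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def stochastic_frank_wolfe(stoch_grad, dim, iters=1000, seed=0):
    # Plain (non-accelerated, non-lazy) stochastic Frank-Wolfe over the simplex.
    # stoch_grad(x, rng) returns an unbiased estimate of the gradient at x;
    # the paper's method reduces the number of such oracle calls to the optimal O(1/eps^2).
    rng = np.random.default_rng(seed)
    x = np.full(dim, 1.0 / dim)            # start at the barycenter of the simplex
    for t in range(1, iters + 1):
        g = stoch_grad(x, rng)             # stochastic first-order oracle call
        v = lmo_simplex(g)                 # linear minimization oracle call (no projection)
        gamma = 2.0 / (t + 2)              # standard Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * v  # convex combination keeps x feasible
    return x

if __name__ == "__main__":
    # Toy objective f(x) = 0.5 * ||A x - b||^2 with additive gradient noise.
    rng0 = np.random.default_rng(1)
    A = rng0.standard_normal((50, 10))
    b = rng0.standard_normal(50)

    def stoch_grad(x, rng):
        return A.T @ (A @ x - b) + 0.1 * rng.standard_normal(x.shape)

    x_hat = stochastic_frank_wolfe(stoch_grad, dim=10, iters=2000)
    print("approximate minimizer on the simplex:", np.round(x_hat, 3))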
@InProceedings{pmlr-v70-lan17a,
title = {Conditional Accelerated Lazy Stochastic Gradient Descent},
author = {Guanghui Lan and Sebastian Pokutta and Yi Zhou and Daniel Zink},
booktitle = {Proceedings of the 34th International Conference on Machine Learning},
pages = {1965--1974},
year = {2017},
editor = {Doina Precup and Yee Whye Teh},
volume = {70},
series = {Proceedings of Machine Learning Research},
address = {International Convention Centre, Sydney, Australia},
month = {06--11 Aug},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v70/lan17a/lan17a.pdf},
url = {http://proceedings.mlr.press/v70/lan17a.html},
abstract = {In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with optimal number of calls to a stochastic first-order oracle and convergence rate $O(1/\epsilon^2)$ improving over the projection-free, Online Frank-Wolfe based stochastic gradient descent of (Hazan and Kale, 2012) with convergence rate $O(1/\epsilon^4)$.}
}
Lan, G., Pokutta, S., Zhou, Y. & Zink, D. (2017). Conditional Accelerated Lazy Stochastic Gradient Descent. Proceedings of the 34th International Conference on Machine Learning, in PMLR 70:1965-1974.