Linear Convergence of Stochastic Frank Wolfe Variants

Donald Goldfarb, Garud Iyengar, Chaoxu Zhou
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:1066-1074, 2017.

Abstract

In this paper, we show that the Away-step Stochastic Frank-Wolfe (ASFW) and Pairwise Stochastic Frank-Wolfe (PSFW) algorithms converge linearly in expectation. We also show that if an algorithm converges linearly in expectation then it converges linearly almost surely. In order to prove these results, we develop a novel proof technique based on concepts of empirical processes and concentration inequalities. As far as we know, this technique has not been used previously to derive the convergence rates of stochastic optimization algorithms. In large-scale numerical experiments, ASFW and PSFW perform as well as or better than their stochastic competitors in actual CPU time.
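
The abstract refers to away-step and pairwise stochastic Frank-Wolfe updates. The sketch below illustrates the away-step idea with mini-batch stochastic gradients on a toy least-squares problem over the unit simplex. It is only an illustration under assumed choices (the objective, the batch size of 64, and the step size 2/(t+3) are not from the paper); it is not the authors' ASFW algorithm or its analyzed step-size rule.

# A minimal sketch (not the paper's exact ASFW algorithm) of an away-step
# Frank-Wolfe iteration that uses mini-batch stochastic gradients. The
# objective, batch size, and step-size rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Least-squares objective f(x) = (1/2n)||Ax - b||^2 over the unit simplex.
n, d = 1000, 50
A = rng.standard_normal((n, d))
b = A @ rng.dirichlet(np.ones(d)) + 0.01 * rng.standard_normal(n)

def f(x):
    r = A @ x - b
    return 0.5 * (r @ r) / n

def stochastic_grad(x, batch=64):
    """Unbiased gradient estimate from a random mini-batch of rows."""
    idx = rng.integers(0, n, size=batch)
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch

x = np.ones(d) / d                       # start at the simplex barycenter
active = {i: 1.0 / d for i in range(d)}  # vertex index -> convex weight

for t in range(2000):
    g = stochastic_grad(x)

    # Frank-Wolfe (toward) vertex: best simplex vertex for the linear model.
    s = int(np.argmin(g))
    d_fw = -x.copy()
    d_fw[s] += 1.0                       # d_fw = e_s - x

    # Away vertex: worst vertex currently in the active set.
    v = max(active, key=lambda i: g[i])
    d_aw = x.copy()
    d_aw[v] -= 1.0                       # d_aw = x - e_v

    if g @ d_fw <= g @ d_aw:             # take the more promising direction
        direction, gamma_max, chosen, away = d_fw, 1.0, s, False
    else:
        gamma_max = active[v] / max(1.0 - active[v], 1e-12)
        direction, chosen, away = d_aw, v, True

    # Simple diminishing step size, capped so x stays feasible; the paper
    # analyzes specific step-size rules, this one is only for illustration.
    gamma = min(gamma_max, 2.0 / (t + 3))
    x = x + gamma * direction

    # Maintain the convex-combination weights of the active vertices.
    if away:
        for i in list(active):
            active[i] *= 1.0 + gamma
        active[chosen] -= gamma
        if active[chosen] <= 1e-12:
            del active[chosen]           # drop step: vertex leaves the set
    else:
        for i in list(active):
            active[i] *= 1.0 - gamma
        active[chosen] = active.get(chosen, 0.0) + gamma

print("final objective:", f(x))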

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-goldfarb17a,
  title     = {{Linear Convergence of Stochastic Frank Wolfe Variants}},
  author    = {Goldfarb, Donald and Iyengar, Garud and Zhou, Chaoxu},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {1066--1074},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/goldfarb17a/goldfarb17a.pdf},
  url       = {https://proceedings.mlr.press/v54/goldfarb17a.html},
  abstract  = {In this paper, we show that the Away-step Stochastic Frank-Wolfe (ASFW) and Pairwise Stochastic Frank-Wolfe (PSFW) algorithms converge linearly in expectation. We also show that if an algorithm converges linearly in expectation then it converges linearly almost surely. In order to prove these results, we develop a novel proof technique based on concepts of empirical processes and concentration inequalities. As far as we know, this technique has not been used previously to derive the convergence rates of stochastic optimization algorithms. In large-scale numerical experiments, ASFW and PSFW perform as well as or better than their stochastic competitors in actual CPU time.}
}
Endnote
%0 Conference Paper
%T Linear Convergence of Stochastic Frank Wolfe Variants
%A Donald Goldfarb
%A Garud Iyengar
%A Chaoxu Zhou
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-goldfarb17a
%I PMLR
%P 1066--1074
%U https://proceedings.mlr.press/v54/goldfarb17a.html
%V 54
%X In this paper, we show that the Away-step Stochastic Frank-Wolfe (ASFW) and Pairwise Stochastic Frank-Wolfe (PSFW) algorithms converge linearly in expectation. We also show that if an algorithm converges linearly in expectation then it converges linearly almost surely. In order to prove these results, we develop a novel proof technique based on concepts of empirical processes and concentration inequalities. As far as we know, this technique has not been used previously to derive the convergence rates of stochastic optimization algorithms. In large-scale numerical experiments, ASFW and PSFW perform as well as or better than their stochastic competitors in actual CPU time.
APA
Goldfarb, D., Iyengar, G., & Zhou, C. (2017). Linear Convergence of Stochastic Frank Wolfe Variants. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:1066-1074. Available from https://proceedings.mlr.press/v54/goldfarb17a.html.