Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives

Zeyuan Allen-Zhu, Yang Yuan
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1080-1089, 2016.

Abstract

Many classical algorithms are found, only years later, to outlive the confines in which they were conceived, and continue to be relevant in unforeseen settings. In this paper, we show that SVRG is one such method: although originally designed for strongly convex objectives, it is also very robust in non-strongly convex or sum-of-non-convex settings. More precisely, we provide new analysis to improve the state-of-the-art running times in both settings by either applying SVRG or its novel variant. Since non-strongly convex objectives include important examples such as Lasso or logistic regression, and sum-of-non-convex objectives include famous examples such as stochastic PCA and are even believed to be related to training deep neural nets, our results also imply better performance in these applications.
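The base method this paper reanalyzes is SVRG (stochastic variance-reduced gradient, Johnson & Zhang, 2013): each epoch computes the full gradient at a snapshot point, then takes cheap stochastic steps whose variance is reduced by the snapshot gradient. The following is a minimal illustrative sketch on a toy least-squares problem; the step size, epoch length, and snapshot rule are arbitrary choices for demonstration, not the tuned parameters from the paper.

```python
import numpy as np

def svrg(grad_i, full_grad, x0, n, eta=0.02, epochs=30, m=500):
    """Basic SVRG sketch: eta is the step size, m the inner-loop length.
    These values are illustrative, not the paper's recommended settings."""
    rng = np.random.default_rng(0)
    x_tilde = x0.copy()
    for _ in range(epochs):
        mu = full_grad(x_tilde)            # full gradient at the snapshot
        x = x_tilde.copy()
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced stochastic gradient:
            # unbiased, and its variance vanishes as x -> x_tilde -> optimum
            g = grad_i(i, x) - grad_i(i, x_tilde) + mu
            x -= eta * g
        x_tilde = x                        # last iterate becomes new snapshot
    return x_tilde

# toy problem: min_x (1/2n) * ||A x - b||^2, a sum of n smooth components
rng = np.random.default_rng(1)
n, d = 50, 5
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star                             # consistent system: x_star is optimal
grad_i = lambda i, x: A[i] * (A[i] @ x - b[i])
full_grad = lambda x: A.T @ (A @ x - b) / n

x_hat = svrg(grad_i, full_grad, np.zeros(d), n)
print("distance to optimum:", np.linalg.norm(x_hat - x_star))
```

On this strongly convex instance the iterates converge linearly to the minimizer; the paper's contribution is precisely that (variants of) this scheme also work when strong convexity fails or when the individual components f_i are non-convex.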

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-allen-zhub16,
  title     = {Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives},
  author    = {Allen-Zhu, Zeyuan and Yuan, Yang},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1080--1089},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/allen-zhub16.pdf},
  url       = {https://proceedings.mlr.press/v48/allen-zhub16.html},
  abstract  = {Many classical algorithms are found until several years later to outlive the confines in which they were conceived, and continue to be relevant in unforeseen settings. In this paper, we show that SVRG is one such method: being originally designed for strongly convex objectives, it is also very robust in non-strongly convex or sum-of-non-convex settings. More precisely, we provide new analysis to improve the state-of-the-art running times in both settings by either applying SVRG or its novel variant. Since non-strongly convex objectives include important examples such as Lasso or logistic regression, and sum-of-non-convex objectives include famous examples such as stochastic PCA and is even believed to be related to training deep neural nets, our results also imply better performances in these applications.}
}
Endnote
%0 Conference Paper
%T Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives
%A Zeyuan Allen-Zhu
%A Yang Yuan
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-allen-zhub16
%I PMLR
%P 1080--1089
%U https://proceedings.mlr.press/v48/allen-zhub16.html
%V 48
%X Many classical algorithms are found until several years later to outlive the confines in which they were conceived, and continue to be relevant in unforeseen settings. In this paper, we show that SVRG is one such method: being originally designed for strongly convex objectives, it is also very robust in non-strongly convex or sum-of-non-convex settings. More precisely, we provide new analysis to improve the state-of-the-art running times in both settings by either applying SVRG or its novel variant. Since non-strongly convex objectives include important examples such as Lasso or logistic regression, and sum-of-non-convex objectives include famous examples such as stochastic PCA and is even believed to be related to training deep neural nets, our results also imply better performances in these applications.
RIS
TY  - CPAPER
TI  - Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives
AU  - Zeyuan Allen-Zhu
AU  - Yang Yuan
BT  - Proceedings of The 33rd International Conference on Machine Learning
DA  - 2016/06/11
ED  - Maria Florina Balcan
ED  - Kilian Q. Weinberger
ID  - pmlr-v48-allen-zhub16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 48
SP  - 1080
EP  - 1089
L1  - http://proceedings.mlr.press/v48/allen-zhub16.pdf
UR  - https://proceedings.mlr.press/v48/allen-zhub16.html
AB  - Many classical algorithms are found until several years later to outlive the confines in which they were conceived, and continue to be relevant in unforeseen settings. In this paper, we show that SVRG is one such method: being originally designed for strongly convex objectives, it is also very robust in non-strongly convex or sum-of-non-convex settings. More precisely, we provide new analysis to improve the state-of-the-art running times in both settings by either applying SVRG or its novel variant. Since non-strongly convex objectives include important examples such as Lasso or logistic regression, and sum-of-non-convex objectives include famous examples such as stochastic PCA and is even believed to be related to training deep neural nets, our results also imply better performances in these applications.
ER  -
APA
Allen-Zhu, Z. & Yuan, Y. (2016). Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1080-1089. Available from https://proceedings.mlr.press/v48/allen-zhub16.html.

Related Material