Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization

Qunwei Li, Yi Zhou, Yingbin Liang, Pramod K. Varshney
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2111-2119, 2017.

Abstract

In this work, we investigate the accelerated proximal gradient method for nonconvex programming (APGnc). The method compares a usual proximal gradient step with a linear extrapolation step and accepts the one with the lower function value, thereby achieving a monotonic decrease. Specifically, under a general nonsmooth and nonconvex setting, we provide a rigorous argument showing that the limit points of the sequence generated by APGnc are critical points of the objective function. Then, by exploiting the Kurdyka-Łojasiewicz (KL) property for a broad class of functions, we establish the linear and sub-linear convergence rates of the function value sequence generated by APGnc. We further propose a stochastic variance-reduced APGnc (SVRG-APGnc) and establish its linear convergence under a special case of the KL property. We also extend the analysis to the inexact versions of these methods and develop an adaptive momentum strategy that improves the numerical performance.
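As a rough sketch of the step the abstract describes, the following Python snippet implements one APGnc iteration for a composite objective F(x) = f(x) + g(x), with f smooth and g admitting a proximal map. The names grad_f, prox_g, and F, the step size eta, and the momentum weight beta are illustrative assumptions for this sketch rather than the paper's notation, and the toy problem at the end is made up for demonstration.

    import numpy as np

    def apgnc_step(x, x_prev, grad_f, prox_g, F, eta, beta):
        """One APGnc iteration: form both candidate points and keep the one
        with the lower objective value, which enforces monotonic decrease."""
        # Candidate 1: plain proximal gradient step from the current iterate.
        v = prox_g(x - eta * grad_f(x), eta)
        # Candidate 2: proximal gradient step from the extrapolated point.
        y = x + beta * (x - x_prev)            # linear extrapolation (momentum)
        z = prox_g(y - eta * grad_f(y), eta)
        # Accept whichever candidate has the lower function value.
        return (z, x) if F(z) <= F(v) else (v, x)

    # Illustrative use on a toy l1-regularized least-squares problem
    # (all data and parameters below are made up for the example).
    A, b = np.random.randn(20, 10), np.random.randn(20)
    lam = 0.1
    grad_f = lambda x: A.T @ (A @ x - b)
    prox_g = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - lam * t, 0.0)
    F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
    x = x_prev = np.zeros(10)
    for _ in range(200):
        x, x_prev = apgnc_step(x, x_prev, grad_f, prox_g, F, eta=0.01, beta=0.9)

Taking the better of the two candidates is what yields the monotonic decrease mentioned above: the plain proximal gradient candidate already guarantees that F does not increase for a sufficiently small step size, so the extrapolated candidate is only ever accepted when it does at least as well. For context, the KL property invoked in the rate analysis can be stated in its Łojasiewicz form roughly as follows (a standard formulation, paraphrased here rather than quoted from the paper):

    % Lojasiewicz form of the KL inequality at a critical point \bar{x}:
    % there exist c > 0, \theta \in [0, 1), and a neighborhood U of \bar{x} such that
    \[
      |F(x) - F(\bar{x})|^{\theta} \le c \, \operatorname{dist}\big(0, \partial F(x)\big)
      \quad \text{for all } x \in U,
    \]
    % where \partial F denotes the (limiting) subdifferential; the exponent \theta
    % governs whether the function-value sequence converges linearly or sub-linearly.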

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-li17g,
  title     = {Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization},
  author    = {Qunwei Li and Yi Zhou and Yingbin Liang and Pramod K. Varshney},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {2111--2119},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/li17g/li17g.pdf},
  url       = {https://proceedings.mlr.press/v70/li17g.html},
  abstract  = {In this work, we investigate the accelerated proximal gradient method for nonconvex programming (APGnc). The method compares a usual proximal gradient step with a linear extrapolation step and accepts the one with the lower function value, thereby achieving a monotonic decrease. Specifically, under a general nonsmooth and nonconvex setting, we provide a rigorous argument showing that the limit points of the sequence generated by APGnc are critical points of the objective function. Then, by exploiting the Kurdyka-Łojasiewicz (KL) property for a broad class of functions, we establish the linear and sub-linear convergence rates of the function value sequence generated by APGnc. We further propose a stochastic variance-reduced APGnc (SVRG-APGnc) and establish its linear convergence under a special case of the KL property. We also extend the analysis to the inexact versions of these methods and develop an adaptive momentum strategy that improves the numerical performance.}
}
Endnote
%0 Conference Paper
%T Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
%A Qunwei Li
%A Yi Zhou
%A Yingbin Liang
%A Pramod K. Varshney
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-li17g
%I PMLR
%P 2111--2119
%U https://proceedings.mlr.press/v70/li17g.html
%V 70
%X In this work, we investigate the accelerated proximal gradient method for nonconvex programming (APGnc). The method compares a usual proximal gradient step with a linear extrapolation step and accepts the one with the lower function value, thereby achieving a monotonic decrease. Specifically, under a general nonsmooth and nonconvex setting, we provide a rigorous argument showing that the limit points of the sequence generated by APGnc are critical points of the objective function. Then, by exploiting the Kurdyka-Łojasiewicz (KL) property for a broad class of functions, we establish the linear and sub-linear convergence rates of the function value sequence generated by APGnc. We further propose a stochastic variance-reduced APGnc (SVRG-APGnc) and establish its linear convergence under a special case of the KL property. We also extend the analysis to the inexact versions of these methods and develop an adaptive momentum strategy that improves the numerical performance.
APA
Li, Q., Zhou, Y., Liang, Y., & Varshney, P. K. (2017). Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:2111-2119. Available from https://proceedings.mlr.press/v70/li17g.html.
