Alternating Minimizations Converge to Second-Order Optimal Solutions

Qiuwei Li, Zhihui Zhu, Gongguo Tang
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3935-3943, 2019.

Abstract

This work studies the second-order convergence for both standard alternating minimization and proximal alternating minimization. We show that under mild assumptions on the (nonconvex) objective function, both algorithms avoid strict saddles almost surely from random initialization. Together with known first-order convergence results, this implies both algorithms converge to a second-order stationary point. This solves an open problem for the second-order convergence of alternating minimization algorithms that have been widely used in practice to solve large-scale nonconvex problems due to their simple implementation, fast convergence, and superb empirical performance.
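For concreteness, the sketch below is not taken from the paper; it illustrates the two algorithms named in the title on the nonconvex matrix-factorization objective f(U, V) = 0.5 ||U V^T - M||_F^2, which is an assumed example objective. The function name and the parameter beta are illustrative choices: beta = 0 recovers standard alternating minimization, and beta > 0 gives the proximal variant, with each block update having a closed-form least-squares solution.

import numpy as np

def proximal_alternating_minimization(M, r, beta=0.0, iters=100, seed=0):
    """Illustrative sketch of (proximal) alternating minimization for
    f(U, V) = 0.5 * ||U V^T - M||_F^2.
    beta = 0: standard alternating minimization (exact block minimization).
    beta > 0: proximal alternating minimization, adding (beta/2)||U - U_k||_F^2
    and (beta/2)||V - V_k||_F^2 to the respective block subproblems."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    # Random initialization, matching the random-initialization assumption
    # under which strict saddles are avoided almost surely.
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    I = np.eye(r)
    for _ in range(iters):
        # U-update: minimize 0.5||U V^T - M||_F^2 + (beta/2)||U - U_k||_F^2 over U,
        # where U_k is the current iterate (the U on the right-hand side).
        # Closed form: U <- (M V + beta U_k)(V^T V + beta I)^{-1}.
        # For beta = 0 this needs V^T V invertible, which holds almost surely
        # under random initialization when r <= min(m, n).
        U = (M @ V + beta * U) @ np.linalg.inv(V.T @ V + beta * I)
        # V-update: minimize 0.5||U V^T - M||_F^2 + (beta/2)||V - V_k||_F^2 over V.
        # Closed form: V <- (M^T U + beta V_k)(U^T U + beta I)^{-1}.
        V = (M.T @ U + beta * V) @ np.linalg.inv(U.T @ U + beta * I)
    return U, V

# Example usage on a synthetic rank-2 matrix (purely illustrative):
# rng = np.random.default_rng(1)
# M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
# U, V = proximal_alternating_minimization(M, r=2, beta=0.1)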

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-li19n,
  title     = {Alternating Minimizations Converge to Second-Order Optimal Solutions},
  author    = {Li, Qiuwei and Zhu, Zhihui and Tang, Gongguo},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3935--3943},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/li19n/li19n.pdf},
  url       = {https://proceedings.mlr.press/v97/li19n.html},
  abstract  = {This work studies the second-order convergence for both standard alternating minimization and proximal alternating minimization. We show that under mild assumptions on the (nonconvex) objective function, both algorithms avoid strict saddles almost surely from random initialization. Together with known first-order convergence results, this implies both algorithms converge to a second-order stationary point. This solves an open problem for the second-order convergence of alternating minimization algorithms that have been widely used in practice to solve large-scale nonconvex problems due to their simple implementation, fast convergence, and superb empirical performance.}
}
Endnote
%0 Conference Paper
%T Alternating Minimizations Converge to Second-Order Optimal Solutions
%A Qiuwei Li
%A Zhihui Zhu
%A Gongguo Tang
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-li19n
%I PMLR
%P 3935--3943
%U https://proceedings.mlr.press/v97/li19n.html
%V 97
%X This work studies the second-order convergence for both standard alternating minimization and proximal alternating minimization. We show that under mild assumptions on the (nonconvex) objective function, both algorithms avoid strict saddles almost surely from random initialization. Together with known first-order convergence results, this implies both algorithms converge to a second-order stationary point. This solves an open problem for the second-order convergence of alternating minimization algorithms that have been widely used in practice to solve large-scale nonconvex problems due to their simple implementation, fast convergence, and superb empirical performance.
APA
Li, Q., Zhu, Z., & Tang, G. (2019). Alternating Minimizations Converge to Second-Order Optimal Solutions. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3935-3943. Available from https://proceedings.mlr.press/v97/li19n.html.