Alternating Minimizations Converge to Second-Order Optimal Solutions
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3935-3943, 2019.
Abstract
This work studies the second-order convergence of both standard alternating minimization and proximal alternating minimization. We show that under mild assumptions on the (nonconvex) objective function, both algorithms avoid strict saddles almost surely from random initialization. Together with known first-order convergence results, this implies both algorithms converge to a second-order stationary point. This solves an open problem concerning the second-order convergence of alternating minimization algorithms, which have been widely used in practice to solve large-scale nonconvex problems due to their simple implementation, fast convergence, and superb empirical performance.
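As a hypothetical illustration (not taken from the paper), the sketch below runs standard alternating minimization on the classic nonconvex rank-1 factorization objective f(u, v) = ||u v^T - M||_F^2, alternating exact least-squares minimization over u and over v. The origin u = v = 0 is a strict saddle of this objective; with random initialization the iterates avoid it and reach a second-order stationary point, consistent with the result the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-1 target matrix, so the global minimum of f attains value 0.
M = np.outer(rng.standard_normal(5), rng.standard_normal(4))

# Random initialization avoids the strict saddle at u = v = 0 almost surely.
u = rng.standard_normal(5)
v = rng.standard_normal(4)

for _ in range(100):
    u = M @ v / (v @ v)    # exact argmin over u with v fixed (least squares)
    v = M.T @ u / (u @ u)  # exact argmin over v with u fixed (least squares)

loss = np.linalg.norm(np.outer(u, v) - M) ** 2
print(loss)  # near 0: a second-order stationary (here global) minimizer
```

Because M is exactly rank 1, a single full sweep already aligns u and v with the factors of M, and the objective drops to (numerically) zero rather than stalling at the saddle.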