Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2009-2018, 2018.
Abstract
In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained nonconvex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms are able to compute second-order stationary solutions (ss2) with probability one. This is the first result showing that a primal-dual algorithm is capable of finding an ss2 using only first-order information; it also extends the existing results for first-order, but primal-only, algorithms. An important implication of our result is that it also gives rise to the first global convergence result to the ss2 for two classes of unconstrained distributed nonconvex learning problems over multi-agent networks.
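To make the setting concrete, the sketch below shows a generic gradient primal-dual iteration for a linearly constrained problem min f(x) s.t. Ax = b: the primal variable takes one gradient step on the augmented Lagrangian, and the dual variable takes one ascent step on the constraint residual. This is a minimal illustrative template, not necessarily the exact GPDA scheme analyzed in the paper, and the toy instance is convex purely so the result is easy to check; the step sizes `alpha` and `rho` are assumed values.

```python
import numpy as np

def gradient_primal_dual(grad_f, A, b, x0, lam0, alpha=0.05, rho=1.0, iters=2000):
    """Generic gradient primal-dual iteration for min f(x) s.t. Ax = b.
    Sketch only: one gradient step on the augmented Lagrangian
    L_rho(x, lam) = f(x) + lam^T (Ax - b) + (rho/2) ||Ax - b||^2,
    followed by one dual ascent step on the constraint residual."""
    x, lam = x0.astype(float), lam0.astype(float)
    for _ in range(iters):
        # gradient of the augmented Lagrangian in x
        g = grad_f(x) + A.T @ lam + rho * A.T @ (A @ x - b)
        x = x - alpha * g
        # dual ascent on the (updated) constraint violation
        lam = lam + rho * (A @ x - b)
    return x, lam

# toy convex instance for illustration: min 0.5 ||x - c||^2  s.t.  sum(x) = 0
c = np.array([1.0, 2.0, 3.0])
A = np.ones((1, 3))
b = np.zeros(1)
x, lam = gradient_primal_dual(lambda x: x - c, A, b, np.zeros(3), np.zeros(1))
# converges to x = c - mean(c) = [-1, 0, 1] with multiplier lam = 2
```

The single gradient step per dual update (rather than an exact primal minimization, as in classical augmented Lagrangian methods) is what makes the scheme a purely first-order primal-dual method of the kind the abstract refers to.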