Catalyst for Gradient-based Nonconvex Optimization
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:613-622, 2018.
Abstract
We introduce a generic scheme to solve nonconvex optimization problems using gradient-based algorithms originally designed for minimizing convex functions. Even though these methods may originally require convexity to operate, the proposed approach allows one to use them without assuming any knowledge about the convexity of the objective. In general, the scheme is guaranteed to produce a stationary point with a worst-case efficiency typical of first-order methods, and when the objective turns out to be convex, it automatically accelerates in the sense of Nesterov and achieves near-optimal convergence rate in function values. We conclude the paper by showing promising experimental results obtained by applying our approach to incremental algorithms such as SVRG and SAGA for sparse matrix factorization and for learning neural networks.
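The sketch below illustrates the general Catalyst idea referred to in the abstract, not the paper's exact algorithm or parameter choices: an outer loop that repeatedly asks a basic gradient method to approximately minimize the objective plus a quadratic proximal term, then extrapolates the prox center in Nesterov's fashion. The names (`catalyst_wrapper`, `kappa`, `inner_lr`, `inner_steps`) and the choice of plain gradient descent as the inner solver are illustrative assumptions, standing in for the incremental methods (SVRG, SAGA) used in the paper.

```python
import numpy as np

def catalyst_wrapper(f_grad, x0, outer_iters=100, inner_steps=50,
                     kappa=1.0, inner_lr=0.1):
    """Hedged sketch of a Catalyst-style outer loop (not the paper's exact
    nonconvex variant): each outer iteration approximately minimizes the
    regularized surrogate f(x) + kappa/2 * ||x - y||^2 with a basic gradient
    method, then extrapolates the prox center y."""
    x = x0.copy()
    y = x0.copy()
    alpha = 1.0
    for _ in range(outer_iters):
        x_prev = x.copy()
        # Inner loop: plain gradient descent on the regularized surrogate,
        # whose gradient is f_grad(z) + kappa * (z - y).
        z = x.copy()
        for _ in range(inner_steps):
            z = z - inner_lr * (f_grad(z) + kappa * (z - y))
        x = z
        # Nesterov-style extrapolation of the prox center
        # (standard Catalyst recursion for the non-strongly-convex case).
        alpha_next = 0.5 * (np.sqrt(alpha**4 + 4 * alpha**2) - alpha**2)
        beta = alpha * (1 - alpha) / (alpha**2 + alpha_next)
        y = x + beta * (x - x_prev)
        alpha = alpha_next
    return x

# Toy usage: a simple quadratic stands in for the smooth objective;
# the wrapper itself never assumes convexity of f.
A = np.diag([1.0, 10.0])
x_star = catalyst_wrapper(lambda x: A @ x, x0=np.array([5.0, -3.0]))
```

In this sketch the regularization weight `kappa` and the inner-loop budget are fixed by hand; the paper's contribution includes choosing such quantities so that the scheme both guarantees stationarity on nonconvex problems and recovers acceleration when the objective is convex.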