On Graduated Optimization for Stochastic Non-Convex Problems
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1833-1841, 2016.
Abstract
The graduated optimization approach, also known as the continuation method, is a popular heuristic for solving non-convex problems that has received renewed interest over the last decade. Despite its popularity, very little is known about its theoretical convergence guarantees. In this paper we describe a new first-order algorithm based on graduated optimization and analyze its performance. We characterize a family of non-convex functions for which this algorithm provably converges to a global optimum. In particular, we prove that the algorithm converges to an ε-approximate solution within O(1/ε^2) gradient-based steps. We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate. Additionally, we discuss the setting of "zero-order optimization", and devise a variant of our algorithm which converges at a rate of O(d^2/ε^4).
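
To illustrate the general idea behind graduated optimization (not the paper's exact algorithm or parameter schedule), the following sketch minimizes a smoothed surrogate f_delta(x) = E_u[f(x + delta·u)], with u drawn uniformly from the unit ball, and gradually sharpens the smoothing. The function names, step sizes, and the halving schedule are illustrative assumptions.

import numpy as np

def graduated_optimization(grad, x0, delta0=1.0, epsilon=1e-2,
                           inner_steps=1000, step_size=0.01, rng=None):
    """Hedged sketch of graduated optimization with ball smoothing.

    grad(x) returns a (possibly noisy) gradient of the non-convex
    objective at x. At each stage we run stochastic gradient steps on
    the delta-smoothed surrogate, then halve delta; constants here are
    illustrative, not the paper's.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    delta = delta0
    while delta > epsilon:
        for _ in range(inner_steps):
            # Unbiased gradient of the smoothed objective: evaluate the
            # gradient at a point drawn uniformly from the delta-ball
            # centered at x (uniform direction, radius ~ delta * U^(1/d)).
            u = rng.normal(size=x.shape)
            u /= np.linalg.norm(u)
            r = delta * rng.uniform() ** (1.0 / x.size)
            x -= step_size * grad(x + r * u)
        delta /= 2.0  # sharpen the smoothing for the next stage
    return x

For example, calling graduated_optimization(grad=lambda x: 2*x + 15*np.cos(5*x), x0=np.array([3.0])) approximately minimizes f(x) = x^2 + 3 sin(5x); the early, heavily smoothed stages suppress the sinusoidal ripples so that later stages start near the global basin.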