Restarting Frank-Wolfe
Proceedings of Machine Learning Research, PMLR 89:1275-1283, 2019.
Abstract
Conditional Gradients (aka Frank-Wolfe algorithms) form a classical set of methods for constrained smooth convex minimization due to their simplicity, the absence of projection steps, and competitive numerical performance. While the vanilla Frank-Wolfe algorithm only ensures a worst-case rate of $O(1/\epsilon)$, various recent results have shown that for strongly convex functions, the method can be slightly modified to achieve linear convergence. However, this still leaves a huge gap between sublinear $O(1/\epsilon)$ convergence and linear $O(\log 1/\epsilon)$ convergence to reach an $\epsilon$-approximate solution. Here, we present a new variant of Conditional Gradients that can dynamically adapt to the function's geometric properties using restarts and thus smoothly interpolates between the sublinear and linear regimes. Furthermore, our results apply to generic compact convex constraint sets.
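To ground the discussion, here is a minimal sketch of the vanilla Frank-Wolfe (conditional gradient) method the abstract refers to, not the restarted variant the paper introduces. It illustrates the projection-free structure: each iteration only calls a linear minimization oracle over the feasible set. The example problem (a quadratic over the probability simplex) and the function names are illustrative choices, not from the paper.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=200):
    """Vanilla Frank-Wolfe: projection-free, needs only a linear
    minimization oracle (LMO) over the compact convex feasible set."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        v = lmo(g)                       # v = argmin_{v in C} <g, v>
        gamma = 2.0 / (t + 2.0)          # classical open-loop step size
        x = (1 - gamma) * x + gamma * v  # convex combination stays feasible
    return x

# Illustrative instance: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.1, 0.5, 0.2])
grad = lambda x: 2.0 * (x - b)

def lmo_simplex(g):
    # Over the simplex, the LMO returns the vertex with the smallest gradient entry.
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

x = frank_wolfe(grad, lmo_simplex, np.array([1.0, 0.0, 0.0]))
```

With the $2/(t+2)$ step size this sketch exhibits the sublinear $O(1/t)$ objective decay mentioned above; the paper's contribution is a restart scheme that adapts to the function's geometry to interpolate toward linear convergence.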