One Sample Stochastic Frank-Wolfe
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:4012-4023, 2020.
Abstract
One of the beauties of the projected gradient descent method lies in its rather simple mechanism and yet stable behavior with inexact, stochastic gradients, which has led to its widespread use in many machine learning applications. However, once we replace the projection operator with a simpler linear program, as is done in the Frank-Wolfe method, both simplicity and stability take a serious hit. The aim of this paper is to bring them back without sacrificing the efficiency. In this paper, we propose the first one-sample stochastic Frank-Wolfe algorithm, called 1-SFW, that avoids the need to carefully tune the batch size, step size, learning rate, and other complicated hyperparameters. In particular, 1-SFW achieves the optimal convergence rate of $\mathcal{O}(1/\epsilon^2)$ for reaching an $\epsilon$-suboptimal solution in the stochastic convex setting, and a $(1-1/e)-\epsilon$ approximate solution for a stochastic monotone DR-submodular maximization problem. Moreover, in a general non-convex setting, 1-SFW finds an $\epsilon$-first-order stationary point after at most $\mathcal{O}(1/\epsilon^3)$ iterations, achieving the current best known convergence rate. All of this is possible by designing a novel unbiased momentum estimator that governs the stability of the optimization process while using a single sample at each iteration.
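To illustrate the kind of method the abstract describes, the sketch below shows a generic one-sample stochastic Frank-Wolfe loop with a plain momentum gradient estimator. This is a simplified illustration, not the paper's 1-SFW: the paper's unbiased momentum estimator includes an additional variance-reduction correction term that this sketch omits, and the objective, constraint set, and step-size/momentum schedules here (`grad_sample`, a probability-simplex `lmo`, $\rho_t = 1/t$, $\eta_t = 2/(t+2)$) are illustrative assumptions.

```python
import numpy as np

def one_sample_stochastic_fw(grad_sample, lmo, x0, num_iters=2000, seed=0):
    """Simplified one-sample stochastic Frank-Wolfe loop.

    At each iteration it draws a SINGLE stochastic gradient sample, folds it
    into a momentum estimate d, and takes a Frank-Wolfe step toward the
    output of the linear minimization oracle (lmo). NOTE: this is a sketch;
    the paper's 1-SFW uses a variance-reduced unbiased momentum estimator.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d = np.zeros_like(x)               # momentum gradient estimate
    for t in range(1, num_iters + 1):
        rho = 1.0 / t                  # momentum weight (assumed schedule)
        g = grad_sample(x, rng)        # one stochastic gradient sample
        d = (1.0 - rho) * d + rho * g  # momentum update of the estimator
        v = lmo(d)                     # linear minimization over the set
        eta = 2.0 / (t + 2)            # standard Frank-Wolfe step size
        x = x + eta * (v - x)          # convex combination stays feasible
    return x

# Toy problem (illustrative): minimize E[0.5 * ||x - b||^2] over the
# probability simplex, observing gradients corrupted by Gaussian noise.
b = np.array([0.2, 0.3, 0.5])

def grad_sample(x, rng):
    # Single noisy gradient sample of 0.5 * ||x - b||^2.
    return x - b + 0.1 * rng.standard_normal(x.shape)

def lmo(d):
    # LMO over the simplex: the vertex with the smallest coordinate of d.
    v = np.zeros_like(d)
    v[np.argmin(d)] = 1.0
    return v

x_hat = one_sample_stochastic_fw(grad_sample, lmo, np.array([1.0, 0.0, 0.0]))
```

Because each iterate is a convex combination of simplex vertices, `x_hat` remains feasible by construction; the momentum estimate `d` averages out the per-sample noise, which is what lets the loop use only one sample per iteration.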