Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron
Proceedings of Machine Learning Research, PMLR 89:1195-1204, 2019.
Abstract
Modern machine learning focuses on highly expressive models that are able to fit or interpolate the data completely, resulting in zero training loss. For such models, we show that the stochastic gradients of common loss functions satisfy a strong growth condition. Under this condition, we prove that constant step-size stochastic gradient descent (SGD) with Nesterov acceleration matches the convergence rate of the deterministic accelerated method for both convex and strongly-convex functions. We also show that this condition implies that SGD can find a first-order stationary point as efficiently as full gradient descent in non-convex settings. Under interpolation, we further show that all smooth loss functions with a finite-sum structure satisfy a weaker growth condition. Given this weaker condition, we prove that SGD with a constant step-size attains the deterministic convergence rate in both the strongly-convex and convex settings. Under additional assumptions, the above results enable us to prove an $O(1/k^2)$ mistake bound for $k$ iterations of a stochastic perceptron algorithm using the squared-hinge loss. Finally, we validate our theoretical findings with experiments on synthetic and real datasets.
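For reference, a hedged rendering of the two growth conditions named in the abstract, in the form they commonly take for a finite-sum objective $f(w) = \frac{1}{n}\sum_{i=1}^{n} f_i(w)$ with $L$-smooth components and minimizer $w^*$ (the symbols $\rho$, $L$, and $w^*$ are notation of this sketch, not quoted from the paper, and the exact constants in the paper may differ):

$$\mathbb{E}_i\big[\|\nabla f_i(w)\|^2\big] \le \rho\,\|\nabla f(w)\|^2 \qquad \text{(strong growth condition)},$$

$$\mathbb{E}_i\big[\|\nabla f_i(w)\|^2\big] \le 2\rho L\,\big(f(w) - f(w^*)\big) \qquad \text{(weaker growth condition)}.$$

Note that under either condition, $\nabla f(w^*) = 0$ forces $\nabla f_i(w^*) = 0$ for every $i$, which is why interpolating models with zero training loss make such conditions plausible.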