Exponential Convergence of Testing Error for Stochastic Gradient Methods
Proceedings of the 31st Conference On Learning Theory, PMLR 75:250-296, 2018.
Abstract
We consider binary classification problems with positive definite kernels and square loss, and study the convergence rates of stochastic gradient methods. We show that while the excess testing \emph{loss} (squared loss) converges slowly to zero as the number of observations (and thus iterations) goes to infinity, the testing \emph{error} (classification error) converges exponentially fast if low-noise conditions are assumed. To achieve these rates of convergence we show sharper high-probability bounds with respect to the number of observations for stochastic gradient descent.
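The distinction between squared loss and classification error can be illustrated with a minimal single-pass kernel SGD sketch. This is not the paper's experimental setup; the Gaussian kernel, the step size, and the synthetic noiseless (hence low-noise) data below are all illustrative assumptions. Each step performs the update $f_{t+1} = f_t - \gamma\,(f_t(x_t) - y_t)\,K(x_t, \cdot)$ on the square loss, with the iterate stored as a kernel expansion over the observed points.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(A, x, sigma=1.0):
    # Squared-exponential kernel; the paper covers general positive definite
    # kernels, this particular choice is an assumption for the demo.
    return np.exp(-np.sum((A - x) ** 2, axis=-1) / (2.0 * sigma ** 2))

# Synthetic, deterministic (noiseless) binary problem -- an extreme instance
# of a low-noise condition. Entirely hypothetical data.
n = 500
X = rng.uniform(-1.0, 1.0, size=(n, 2))
y = np.where(X[:, 0] + X[:, 1] > 0.0, 1.0, -1.0)
X_test = rng.uniform(-1.0, 1.0, size=(200, 2))
y_test = np.where(X_test[:, 0] + X_test[:, 1] > 0.0, 1.0, -1.0)

alpha = np.zeros(n)   # expansion coefficients of the iterate f_t
gamma = 0.5           # constant step size (illustrative choice)

for t in range(n):    # one pass: one observation per iteration
    # Evaluate f_t(x_t) via the kernel expansion over earlier points.
    pred = alpha[:t] @ gaussian_kernel(X[:t], X[t]) if t > 0 else 0.0
    # SGD step on the square loss adds -gamma * (f_t(x_t) - y_t) * K(x_t, .)
    alpha[t] = -gamma * (pred - y[t])

# Compare the testing squared loss with the testing (0-1) error.
f_test = np.array([alpha @ gaussian_kernel(X, xt) for xt in X_test])
sq_loss = np.mean((f_test - y_test) ** 2)
err = np.mean(np.sign(f_test) != y_test)
print(f"test squared loss = {sq_loss:.4f}, test error = {err:.4f}")
```

Since the labels are $\pm 1$, any sign mistake forces $(f(x) - y)^2 \ge 1$, so the 0-1 error is always bounded by the squared loss; the paper's point is that under low-noise conditions the error can in fact decay exponentially fast while the squared loss decays only polynomially.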