Exponential Convergence of Testing Error for Stochastic Gradient Methods

Loucas Pillaud-Vivien, Alessandro Rudi, Francis Bach
Proceedings of the 31st Conference On Learning Theory, PMLR 75:250-296, 2018.

Abstract

We consider binary classification problems with positive definite kernels and square loss, and study the convergence rates of stochastic gradient methods. We show that while the excess testing \emph{loss} (squared loss) converges slowly to zero as the number of observations (and thus iterations) goes to infinity, the testing \emph{error} (classification error) converges exponentially fast if low-noise conditions are assumed. To achieve these rates of convergence we show sharper high-probability bounds with respect to the number of observations for stochastic gradient descent.
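To make the distinction between testing loss and testing error concrete, below is a minimal, self-contained Python/NumPy sketch of the kind of method studied: single-pass stochastic gradient descent on the squared loss in a reproducing kernel Hilbert space, evaluated both through its squared test loss and its 0-1 test error. Every specific choice below (Gaussian kernel, constant step size, sample sizes, one-dimensional toy data with deterministic labels as an extreme low-noise setting, use of the last iterate) is an illustrative assumption, not the paper's exact algorithm, step-size schedule, or averaging scheme.

import numpy as np

# Illustrative sketch only: single-pass kernel SGD on the squared loss.
# Hyperparameters and the toy data are assumptions, not the paper's setup.
rng = np.random.default_rng(0)

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def sample(n):
    """Toy binary data with labels in {-1, +1}; deterministic labels give a strong low-noise regime."""
    X = rng.uniform(-1.0, 1.0, size=(n, 1))
    y = np.sign(X[:, 0])
    return X, y

n_train, n_test, step = 2000, 500, 0.5
X_train, y_train = sample(n_train)
X_test, y_test = sample(n_test)

# The iterate f_t = sum_i alpha_i K(x_i, .) is represented by its coefficients alpha.
alpha = np.zeros(n_train)
for t in range(n_train):
    x_t = X_train[t:t + 1]
    # Prediction of the current iterate at x_t (empty product gives 0.0 when t == 0).
    pred = gaussian_kernel(x_t, X_train[:t]) @ alpha[:t]
    # SGD step on the squared loss: the new coefficient is -step * (f_{t-1}(x_t) - y_t).
    alpha[t] = -step * (pred[0] - y_train[t]) if t > 0 else -step * (0.0 - y_train[t])

# Evaluate the final iterate on held-out data.
f_test = gaussian_kernel(X_test, X_train) @ alpha
test_loss = np.mean((f_test - y_test) ** 2)       # squared testing loss: decays slowly with n
test_error = np.mean(np.sign(f_test) != y_test)   # 0-1 testing error: can vanish much faster
print(f"test squared loss = {test_loss:.4f}, test 0-1 error = {test_error:.4f}")

Running this sketch, the squared test loss stays bounded away from zero far longer than the 0-1 error, which mirrors (in a purely illustrative way) the paper's contrast between slow loss convergence and fast error convergence under low-noise conditions.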

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-pillaud-vivien18a,
  title = {Exponential Convergence of Testing Error for Stochastic Gradient Methods},
  author = {Pillaud-Vivien, Loucas and Rudi, Alessandro and Bach, Francis},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages = {250--296},
  year = {2018},
  editor = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume = {75},
  series = {Proceedings of Machine Learning Research},
  month = {06--09 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v75/pillaud-vivien18a/pillaud-vivien18a.pdf},
  url = {https://proceedings.mlr.press/v75/pillaud-vivien18a.html},
  abstract = {We consider binary classification problems with positive definite kernels and square loss, and study the convergence rates of stochastic gradient methods. We show that while the excess testing \emph{loss} (squared loss) converges slowly to zero as the number of observations (and thus iterations) goes to infinity, the testing \emph{error} (classification error) converges exponentially fast if low-noise conditions are assumed. To achieve these rates of convergence we show sharper high-probability bounds with respect to the number of observations for stochastic gradient descent.}
}
Endnote
%0 Conference Paper
%T Exponential Convergence of Testing Error for Stochastic Gradient Methods
%A Loucas Pillaud-Vivien
%A Alessandro Rudi
%A Francis Bach
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-pillaud-vivien18a
%I PMLR
%P 250--296
%U https://proceedings.mlr.press/v75/pillaud-vivien18a.html
%V 75
%X We consider binary classification problems with positive definite kernels and square loss, and study the convergence rates of stochastic gradient methods. We show that while the excess testing \emph{loss} (squared loss) converges slowly to zero as the number of observations (and thus iterations) goes to infinity, the testing \emph{error} (classification error) converges exponentially fast if low-noise conditions are assumed. To achieve these rates of convergence we show sharper high-probability bounds with respect to the number of observations for stochastic gradient descent.
APA
Pillaud-Vivien, L., Rudi, A. & Bach, F. (2018). Exponential Convergence of Testing Error for Stochastic Gradient Methods. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:250-296. Available from https://proceedings.mlr.press/v75/pillaud-vivien18a.html.