Tight analyses for non-smooth stochastic gradient descent
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1579-1613, 2019.
Abstract
Consider the problem of minimizing functions that are Lipschitz and strongly convex, but not necessarily differentiable. We prove that after $T$ steps of stochastic gradient descent, the error of the final iterate is $O(\log(T)/T)$ \emph{with high probability}. We also construct a function from this class for which the error of the final iterate of \emph{deterministic} gradient descent is $\Omega(\log(T)/T)$. This shows that the upper bound is tight and that, in this setting, the last iterate of stochastic gradient descent has the same general error rate (with high probability) as deterministic gradient descent. This resolves both open questions posed by Shamir (2012). An intermediate step of our analysis proves that the suffix averaging method achieves error $O(1/T)$ \emph{with high probability}, which is optimal (for any first-order optimization method). This improves results of Rakhlin et al. (2012) and Hazan and Kale (2014), both of which achieved error $O(1/T)$, but only in expectation, and achieved a high probability error bound of $O(\log \log(T)/T)$, which is suboptimal.
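As a concrete illustration of the setting, the sketch below runs projected stochastic subgradient descent on the non-smooth, strongly convex function $f(x) = |x| + \tfrac{\lambda}{2} x^2$ with the standard step sizes $\eta_t = 1/(\lambda t)$, and compares the error of the final iterate with that of a suffix average. The choice of function, noise model, and parameters is an assumption made for illustration only; it is not the construction or the experiments from the paper.

```python
# Illustrative sketch (not the paper's construction): stochastic subgradient descent
# on the non-smooth, strongly convex function f(x) = |x| + (lam/2) * x**2,
# comparing the final iterate with the suffix average of the last T/2 iterates.
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0      # strong convexity parameter (assumed)
T = 10_000     # number of SGD steps
R = 10.0       # radius of the feasible interval [-R, R]

def f(x):
    return abs(x) + 0.5 * lam * x**2

def noisy_subgradient(x):
    """A subgradient of f at x, plus zero-mean Gaussian noise (assumed oracle)."""
    return np.sign(x) + lam * x + rng.normal(scale=1.0)

x = R                      # start at the boundary of the feasible set
iterates = np.empty(T + 1)
iterates[0] = x
for t in range(1, T + 1):
    eta = 1.0 / (lam * t)              # standard step size for strongly convex SGD
    x = x - eta * noisy_subgradient(x)
    x = np.clip(x, -R, R)              # project back onto [-R, R]
    iterates[t] = x

f_star = 0.0                           # minimum value, attained at x* = 0
final_error = f(iterates[-1]) - f_star
suffix_error = f(iterates[T // 2:].mean()) - f_star   # suffix averaging
print(f"final iterate error:  {final_error:.5f}")
print(f"suffix average error: {suffix_error:.5f}")
```

Consistent with the bounds above, one would expect the suffix average to concentrate at rate $O(1/T)$, while the final iterate can be worse by a logarithmic factor.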