Averaging Stochastic Gradient Descent on Riemannian Manifolds
Proceedings of the 31st Conference On Learning Theory, PMLR 75:650-687, 2018.
Abstract
We consider the minimization of a function defined on a Riemannian manifold $\mathcal{M}$ accessible only through unbiased estimates of its gradients. We develop a geometric framework to transform a sequence of slowly converging iterates generated from stochastic gradient descent (SGD) on $\mathcal{M}$ to an averaged iterate sequence with a robust and fast $O(1/n)$ convergence rate. We then present an application of our framework to geodesically-strongly-convex (and possibly Euclidean non-convex) problems. Finally, we demonstrate how these ideas apply to the case of streaming $k$-PCA, where we show how to accelerate the slow rate of the randomized power method (without requiring knowledge of the eigengap) into a robust algorithm achieving the optimal rate of convergence.
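To make the streaming $k$-PCA setting concrete, the following is a minimal sketch of the baseline the abstract refers to: a stochastic (randomized) power method with an Oja-style gradient step followed by a QR retraction back onto the Stiefel manifold. This is an illustrative baseline only, not the paper's averaged algorithm; the function name, step size, and data model are assumptions for the example.

```python
import numpy as np

def streaming_kpca(samples, k, lr=0.05, seed=0):
    """Streaming k-PCA via a stochastic power-method (Oja-style) update.

    For each sample x, take a stochastic gradient step
        U <- U + lr * x (x^T U)
    and retract back to the Stiefel manifold (orthonormal columns) via QR.
    This is the slow-rate baseline; the paper's contribution is an
    averaging scheme on the manifold that accelerates such iterates.
    """
    samples = iter(samples)
    x0 = np.asarray(next(samples))
    d = x0.shape[0]
    rng = np.random.default_rng(seed)
    # Random orthonormal initialization, as in the randomized power method.
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for x in [x0, *samples]:
        x = np.asarray(x)
        U = U + lr * np.outer(x, x @ U)  # stochastic gradient step
        U, _ = np.linalg.qr(U)          # retraction: restore orthonormal columns
    return U
```

With a fixed step size this iterate converges only to a neighborhood of the top-$k$ eigenspace, and decaying step sizes require eigengap-dependent tuning; the framework in the paper avoids both issues by averaging the iterates on the manifold.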