Stochastic Composite Least-Squares Regression with Convergence Rate $O(1/n)$
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:831-875, 2017.
Abstract
We consider the minimization of composite objective functions composed of the expectation of quadratic functions and an arbitrary convex function. We study the stochastic dual averaging algorithm with a constant step-size, showing that it leads to a convergence rate of O(1/n) without strong convexity assumptions. This extends earlier results on least-squares regression with the Euclidean geometry to (a) all convex regularizers and constraints, and (b) all geometries represented by a Bregman divergence. This is achieved by a new proof technique that relates stochastic and deterministic recursions.
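To make the composite setting concrete, the following is a minimal, self-contained sketch of stochastic dual averaging with a constant step-size applied to l1-regularized least-squares, one instance of "expectation of quadratic functions plus a convex function". It is an illustrative reconstruction of a standard composite dual-averaging update, not the paper's exact algorithm; all names, constants, and the synthetic data are assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): stochastic dual
# averaging with a CONSTANT step-size on l1-regularized least-squares.
# All constants below (gamma, lam, dimensions) are assumed, not from the paper.

rng = np.random.default_rng(0)
d, n_steps = 10, 2000
theta_star = np.zeros(d)
theta_star[:3] = [2.0, -1.0, 0.5]   # sparse ground truth (synthetic)

gamma = 0.05   # constant step-size (assumed value)
lam = 0.01     # l1 regularization weight (assumed value)

z = np.zeros(d)          # running sum of stochastic gradients (dual variable)
theta = np.zeros(d)
avg_theta = np.zeros(d)  # Polyak-Ruppert average of the iterates

for n in range(1, n_steps + 1):
    x = rng.normal(size=d)                    # random feature vector
    y = x @ theta_star + 0.1 * rng.normal()   # noisy linear observation
    g = (x @ theta - y) * x                   # stochastic gradient of the quadratic part
    z += g
    # Closed-form composite dual-averaging step:
    #   theta = argmin_t  <z, t> + n*lam*||t||_1 + ||t||^2 / (2*gamma)
    # which is a soft-thresholding of -gamma * z at level gamma * n * lam.
    theta = -gamma * np.sign(z) * np.maximum(np.abs(z) - n * lam, 0.0)
    avg_theta += (theta - avg_theta) / n      # running average

print(np.round(avg_theta, 2))
```

The convex regularizer enters only through the argmin step, which here has a closed form (soft-thresholding); the averaged iterate `avg_theta` is the quantity for which O(1/n)-type rates are typically stated.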