Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization

Yuchen Zhang, Lin Xiao;
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:353-361, 2015.

Abstract

We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle-point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over one (or more) randomly chosen dual variables and minimizing over the primal variable. We also develop an extension to non-smooth and non-strongly-convex loss functions, and an extension with a better convergence rate on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.
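The alternation the abstract describes can be sketched concretely. Below is a minimal, illustrative SPDC-style loop for ridge regression, i.e. min_x (1/n) Σ_i (a_i^T x − b_i)²/2 + (λ/2)‖x‖², where both the dual maximization (over one randomly chosen dual variable) and the primal minimization have closed-form prox steps. The step sizes follow the general shape suggested by the paper's theory for smooth losses, but the exact constants, variable names, and the `spdc_ridge` helper are assumptions for this sketch, not the authors' reference implementation.

```python
import numpy as np

def spdc_ridge(A, b, lam, iters=20000, seed=0):
    """Illustrative SPDC-style sketch for ridge regression:
    min_x (1/n) sum_i 0.5*(a_i^T x - b_i)^2 + (lam/2)*||x||^2.
    Step-size constants are assumed, chosen for smooth quadratic losses."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    R = np.max(np.linalg.norm(A, axis=1))         # max data-row norm
    gamma = 1.0                                   # curvature constant for the quadratic loss
    tau = 0.5 / R * np.sqrt(gamma / (n * lam))    # primal step size
    sigma = 0.5 / R * np.sqrt(n * lam / gamma)    # dual step size
    theta = 1.0 - 1.0 / (n + R * np.sqrt(n / (lam * gamma)))  # extrapolation weight

    x = np.zeros(d)
    xbar = x.copy()          # extrapolated primal point
    y = np.zeros(n)          # dual variables, one per example
    ubar = A.T @ y / n       # running average (1/n) * sum_i y_i * a_i
    for _ in range(iters):
        i = rng.integers(n)  # pick one dual coordinate at random
        a_i = A[i]
        # dual ascent: closed-form prox step for the conjugate loss
        # phi_i^*(y) = y^2/2 + b_i*y of the quadratic loss
        y_new = (sigma * (a_i @ xbar - b[i]) + y[i]) / (sigma + 1.0)
        # primal descent: closed-form prox step for g(x) = (lam/2)||x||^2
        w = ubar + (y_new - y[i]) * a_i
        x_new = (x - tau * w) / (1.0 + tau * lam)
        # extrapolate the primal iterate and refresh the running average
        xbar = x_new + theta * (x_new - x)
        ubar = ubar + (y_new - y[i]) * a_i / n
        y[i] = y_new
        x = x_new
    return x
```

One can sanity-check such a sketch against the closed-form ridge solution `x* = (AᵀA/n + λI)⁻¹ Aᵀb/n`; with conservative step sizes the iterates approach it linearly, consistent with the convergence behavior the abstract claims for strongly convex objectives.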
