Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization

Yuchen Zhang, Xiao Lin
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:353-361, 2015.

Abstract

We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over one (or more) randomly chosen dual variables and minimizing over the primal variable. We also develop an extension to non-smooth and non-strongly convex loss functions, and an extension with a better convergence rate on unnormalized data. Both theoretically and empirically, we show that the SPDC method performs comparably to, or better than, several state-of-the-art optimization methods.
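As context for the update pattern the abstract describes: the ERM objective min_x (1/n) Σ_i φ_i(a_i^T x) + g(x) is rewritten, via the convex conjugates φ_i*, as the saddle-point problem min_x max_y (1/n) Σ_i ( y_i ⟨a_i, x⟩ − φ_i*(y_i) ) + g(x), and each iteration updates one randomly chosen dual coordinate y_k followed by the primal variable x, with an extrapolation step on the primal iterate. Below is a minimal Python sketch of that alternation, specialized to ridge regression (squared loss, L2 regularizer) so both proximal steps have closed forms. The helper name spdc_ridge and the exact step-size and extrapolation constants are our illustrative assumptions following the general shape of the paper's theory, not its prescribed values.

```python
# Minimal sketch of the SPDC update pattern for ridge regression.
# The constants tau, sigma, theta below are illustrative, not tuned.
import numpy as np

def spdc_ridge(A, b, lam, n_epochs=50, seed=0):
    """SPDC sketch for min_x (1/n) sum_i 0.5*(a_i^T x - b_i)^2 + (lam/2)*||x||^2."""
    n, d = A.shape
    rng = np.random.default_rng(seed)
    R = np.max(np.linalg.norm(A, axis=1))  # bound on the feature norms ||a_i||
    gamma = 1.0                            # squared loss is 1-smooth
    tau = 0.5 / R * np.sqrt(gamma / (n * lam))    # primal step size
    sigma = 0.5 / R * np.sqrt(n * lam / gamma)    # dual step size
    theta = 1.0 - 1.0 / (n + R * np.sqrt(n / (lam * gamma)))  # extrapolation weight

    x = np.zeros(d)        # primal variable
    y = np.zeros(n)        # one dual variable per training example
    x_bar = x.copy()       # extrapolated primal point
    u = A.T @ y / n        # running average u = (1/n) sum_i y_i a_i

    for _ in range(n_epochs * n):
        k = rng.integers(n)
        a_k = A[k]
        # Dual coordinate ascent step: closed-form prox of the conjugate
        # of the squared loss, phi_k*(beta) = beta^2/2 + beta*b_k.
        y_new = (y[k] + sigma * (a_k @ x_bar - b[k])) / (1.0 + sigma)
        dy = y_new - y[k]
        y[k] = y_new
        # Primal descent step: prox of (lam/2)*||x||^2 is a shrinkage.
        x_new = (x - tau * (u + dy * a_k)) / (1.0 + lam * tau)
        # Extrapolate the primal iterate and refresh the running average.
        x_bar = x_new + theta * (x_new - x)
        u += (dy / n) * a_k
        x = x_new
    return x

# Usage: compare against the closed-form ridge solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)
x_spdc = spdc_ridge(A, b, lam=0.1)
x_exact = np.linalg.solve(A.T @ A / 200 + 0.1 * np.eye(10), A.T @ b / 200)
print("distance to exact ridge solution:", np.linalg.norm(x_spdc - x_exact))
```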

Cite this Paper

BibTeX
@InProceedings{pmlr-v37-zhanga15,
  title     = {Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization},
  author    = {Zhang, Yuchen and Lin, Xiao},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {353--361},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/zhanga15.pdf},
  url       = {https://proceedings.mlr.press/v37/zhanga15.html},
  abstract  = {We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate method, which alternates between maximizing over one (or more) randomly chosen dual variable and minimizing over the primal variable. We also develop an extension to non-smooth and non-strongly convex loss functions, and an extension with better convergence rate on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.}
}
Endnote
%0 Conference Paper
%T Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
%A Yuchen Zhang
%A Xiao Lin
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-zhanga15
%I PMLR
%P 353--361
%U https://proceedings.mlr.press/v37/zhanga15.html
%V 37
%X We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate method, which alternates between maximizing over one (or more) randomly chosen dual variable and minimizing over the primal variable. We also develop an extension to non-smooth and non-strongly convex loss functions, and an extension with better convergence rate on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.
RIS
TY - CPAPER
TI - Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
AU - Yuchen Zhang
AU - Xiao Lin
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-zhanga15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 353
EP - 361
L1 - http://proceedings.mlr.press/v37/zhanga15.pdf
UR - https://proceedings.mlr.press/v37/zhanga15.html
AB - We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate method, which alternates between maximizing over one (or more) randomly chosen dual variable and minimizing over the primal variable. We also develop an extension to non-smooth and non-strongly convex loss functions, and an extension with better convergence rate on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.
ER -
APA
Zhang, Y. & Lin, X. (2015). Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:353-361. Available from https://proceedings.mlr.press/v37/zhanga15.html.
