Towards an optimal stochastic alternating direction method of multipliers

Samaneh Azadi, Suvrit Sra
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(1):620-628, 2014.

Abstract

We study regularized stochastic convex optimization subject to linear equality constraints. This class of problems was recently also studied by Ouyang et al. (2013) and Suzuki (2013); both introduced similar stochastic alternating direction method of multipliers (SADMM) algorithms. However, the analysis of both papers led to suboptimal convergence rates. This paper presents two new SADMM methods: (i) the first attains the minimax optimal rate of O(1/k) for nonsmooth strongly-convex stochastic problems; while (ii) the second progresses towards an optimal rate by exhibiting an O(1/k^2) rate for the smooth part. We present several experiments with our new methods; the results indicate improved performance over competing ADMM methods.
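The setting sketched in the abstract is constrained stochastic optimization of the form min f(x) + g(y) subject to a linear equality constraint, solved by ADMM-style updates driven by stochastic gradients of f. As a rough illustration, below is a minimal sketch of the generic SADMM template in the spirit of Ouyang et al. (2013), applied to a stochastic lasso with the splitting constraint x - y = 0. The problem instance, step-size schedule, and all variable names are illustrative assumptions; this is not the paper's accelerated algorithm.

import numpy as np

# Minimal SADMM sketch for a stochastic lasso:
#   min_x  E[0.5*(a^T x - b)^2] + lam*||x||_1,
# split as f(x) + g(y) subject to x - y = 0.
# Generic template only (Ouyang et al., 2013 style); illustrative throughout.

rng = np.random.default_rng(0)
d, lam, rho = 20, 0.1, 1.0
x_true = np.zeros(d)
x_true[:3] = [2.0, -1.0, 0.5]          # sparse ground truth (assumed for the demo)

x = np.zeros(d)                         # primal variable (stochastic smooth part)
y = np.zeros(d)                         # split variable (handles the l1 regularizer)
u = np.zeros(d)                         # scaled dual variable

def soft_threshold(v, t):
    # Prox operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for k in range(1, 5001):
    # Draw one stochastic sample (a, b); form a stochastic gradient of f at x.
    a = rng.standard_normal(d)
    b = a @ x_true + 0.01 * rng.standard_normal()
    grad = (a @ x - b) * a

    # Illustrative O(1/sqrt(k)) step size for the general convex case; a
    # strongly convex f would admit the O(1/k) schedule analyzed in the paper.
    eta = 1.0 / np.sqrt(k)

    # x-update: linearized stochastic gradient plus augmented-Lagrangian
    # quadratic; closed form because the constraint here is x - y = 0.
    x = (rho * (y - u) + x / eta - grad) / (rho + 1.0 / eta)

    # y-update: exact prox of lam*||.||_1 (soft-thresholding).
    y = soft_threshold(x + u, lam / rho)

    # Dual ascent on the scaled multiplier.
    u = u + x - y

print("recovered support:", np.nonzero(np.abs(y) > 1e-2)[0])

The 1/sqrt(k) step size matches the general convex setting; per the abstract, the paper's first method attains the minimax optimal O(1/k) rate under strong convexity, and its second method improves the dependence on the smooth component to O(1/k^2).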

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-azadi14,
  title     = {Towards an optimal stochastic alternating direction method of multipliers},
  author    = {Azadi, Samaneh and Sra, Suvrit},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {620--628},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/azadi14.pdf},
  url       = {https://proceedings.mlr.press/v32/azadi14.html}
}
Endnote
%0 Conference Paper
%T Towards an optimal stochastic alternating direction method of multipliers
%A Samaneh Azadi
%A Suvrit Sra
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-azadi14
%I PMLR
%P 620--628
%U https://proceedings.mlr.press/v32/azadi14.html
%V 32
%N 1
RIS
TY - CPAPER
TI - Towards an optimal stochastic alternating direction method of multipliers
AU - Samaneh Azadi
AU - Suvrit Sra
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/01/27
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-azadi14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 1
SP - 620
EP - 628
L1 - http://proceedings.mlr.press/v32/azadi14.pdf
UR - https://proceedings.mlr.press/v32/azadi14.html
ER -
APA
Azadi, S. & Sra, S. (2014). Towards an optimal stochastic alternating direction method of multipliers. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(1):620-628. Available from https://proceedings.mlr.press/v32/azadi14.html.
