A General Analysis of the Convergence of ADMM

Robert Nishihara, Laurent Lessard, Ben Recht, Andrew Packard, Michael Jordan
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:343-352, 2015.

Abstract

We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. Our proof is based on a framework for analyzing optimization algorithms introduced in Lessard et al. (2014), reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates any assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence.
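As a concrete reference point (not taken from the paper itself), the sketch below shows scaled-form ADMM applied to the lasso, a standard instance in which one objective term is strongly convex when A has full column rank. The problem split, variable names, and the penalty value rho are illustrative assumptions; rho is the kind of step-size parameter whose selection the paper's convergence bound is meant to inform.

```python
import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of kappa * ||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x - z = 0."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # Cache a Cholesky factorization for the x-subproblem, which is
    # strongly convex since A^T A + rho*I is positive definite.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho*I) x = A^T b + rho * (z - u)
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal step on the l1 term
        z = soft_threshold(x + u, lam / rho)
        # scaled dual update
        u = u + x - z
    return z

# Usage on synthetic data; tuning rho (e.g., by minimizing a rate bound
# such as the one derived in the paper) changes how fast this converges.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.1, rho=1.0)
```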

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-nishihara15,
  title     = {A General Analysis of the Convergence of ADMM},
  author    = {Nishihara, Robert and Lessard, Laurent and Recht, Ben and Packard, Andrew and Jordan, Michael},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {343--352},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/nishihara15.pdf},
  url       = {https://proceedings.mlr.press/v37/nishihara15.html},
  abstract  = {We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. Our proof is based on a framework for analyzing optimization algorithms introduced in Lessard et al. (2014), reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates any assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence.}
}
Endnote
%0 Conference Paper
%T A General Analysis of the Convergence of ADMM
%A Robert Nishihara
%A Laurent Lessard
%A Ben Recht
%A Andrew Packard
%A Michael Jordan
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-nishihara15
%I PMLR
%P 343--352
%U https://proceedings.mlr.press/v37/nishihara15.html
%V 37
%X We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. Our proof is based on a framework for analyzing optimization algorithms introduced in Lessard et al. (2014), reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates any assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence.
RIS
TY  - CPAPER
TI  - A General Analysis of the Convergence of ADMM
AU  - Robert Nishihara
AU  - Laurent Lessard
AU  - Ben Recht
AU  - Andrew Packard
AU  - Michael Jordan
BT  - Proceedings of the 32nd International Conference on Machine Learning
DA  - 2015/06/01
ED  - Francis Bach
ED  - David Blei
ID  - pmlr-v37-nishihara15
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 37
SP  - 343
EP  - 352
L1  - http://proceedings.mlr.press/v37/nishihara15.pdf
UR  - https://proceedings.mlr.press/v37/nishihara15.html
AB  - We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. Our proof is based on a framework for analyzing optimization algorithms introduced in Lessard et al. (2014), reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates any assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence.
ER  -
APA
Nishihara, R., Lessard, L., Recht, B., Packard, A. & Jordan, M. (2015). A General Analysis of the Convergence of ADMM. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:343-352. Available from https://proceedings.mlr.press/v37/nishihara15.html.