Adaptive Consensus ADMM for Distributed Optimization

Zheng Xu, Gavin Taylor, Hao Li, Mário A. T. Figueiredo, Xiaoming Yuan, Tom Goldstein
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:3841-3850, 2017.

Abstract

The alternating direction method of multipliers (ADMM) is commonly used for distributed model fitting problems, but its performance and reliability depend strongly on user-defined penalty parameters. We study distributed ADMM methods that boost performance by using different fine-tuned algorithm parameters on each worker node. We present an O(1/k) convergence rate for adaptive ADMM methods with node-specific parameters, and propose adaptive consensus ADMM (ACADMM), which automatically tunes parameters without user oversight.
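To make the setting concrete, the sketch below implements plain consensus ADMM for a distributed least-squares problem with a *fixed* node-specific penalty parameter tau_i per worker, which is the algorithmic skeleton the paper builds on. This is not the paper's ACADMM adaptation rule (which tunes each tau_i automatically via spectral stepsize estimates); the function name, the choice of least-squares objective, and the fixed penalties are illustrative assumptions.

```python
import numpy as np

def consensus_admm(A_blocks, b_blocks, taus, iters=500):
    """Consensus ADMM for min_x sum_i 0.5*||A_i x - b_i||^2,
    split as x_i = z, with a node-specific penalty tau_i per worker.
    The penalties are held fixed here; ACADMM would adapt them."""
    n = A_blocks[0].shape[1]
    N = len(A_blocks)
    z = np.zeros(n)                        # global consensus variable
    xs = [np.zeros(n) for _ in range(N)]   # local primal variables
    us = [np.zeros(n) for _ in range(N)]   # scaled duals u_i = lambda_i / tau_i
    for _ in range(iters):
        # Local x-updates (closed form for the least-squares subproblem)
        for i in range(N):
            A, b, tau = A_blocks[i], b_blocks[i], taus[i]
            xs[i] = np.linalg.solve(A.T @ A + tau * np.eye(n),
                                    A.T @ b + tau * (z - us[i]))
        # Global z-update: with node-specific penalties this becomes a
        # tau-weighted average of the local variables plus scaled duals
        z = sum(taus[i] * (xs[i] + us[i]) for i in range(N)) / sum(taus)
        # Scaled dual updates
        for i in range(N):
            us[i] += xs[i] - z
    return z
```

On a small synthetic problem, the consensus iterate converges to the centralized least-squares solution even though each node runs with a different penalty, which is the regime the paper's node-specific convergence analysis covers.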

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-xu17c,
  title     = {Adaptive Consensus {ADMM} for Distributed Optimization},
  author    = {Zheng Xu and Gavin Taylor and Hao Li and M{\'a}rio A. T. Figueiredo and Xiaoming Yuan and Tom Goldstein},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {3841--3850},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/xu17c/xu17c.pdf},
  url       = {https://proceedings.mlr.press/v70/xu17c.html},
  abstract  = {The alternating direction method of multipliers (ADMM) is commonly used for distributed model fitting problems, but its performance and reliability depend strongly on user-defined penalty parameters. We study distributed ADMM methods that boost performance by using different fine-tuned algorithm parameters on each worker node. We present an O(1/k) convergence rate for adaptive ADMM methods with node-specific parameters, and propose adaptive consensus ADMM (ACADMM), which automatically tunes parameters without user oversight.}
}
Endnote
%0 Conference Paper
%T Adaptive Consensus ADMM for Distributed Optimization
%A Zheng Xu
%A Gavin Taylor
%A Hao Li
%A Mário A. T. Figueiredo
%A Xiaoming Yuan
%A Tom Goldstein
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-xu17c
%I PMLR
%P 3841--3850
%U https://proceedings.mlr.press/v70/xu17c.html
%V 70
%X The alternating direction method of multipliers (ADMM) is commonly used for distributed model fitting problems, but its performance and reliability depend strongly on user-defined penalty parameters. We study distributed ADMM methods that boost performance by using different fine-tuned algorithm parameters on each worker node. We present an O(1/k) convergence rate for adaptive ADMM methods with node-specific parameters, and propose adaptive consensus ADMM (ACADMM), which automatically tunes parameters without user oversight.
APA
Xu, Z., Taylor, G., Li, H., Figueiredo, M.A.T., Yuan, X. & Goldstein, T. (2017). Adaptive Consensus ADMM for Distributed Optimization. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:3841-3850. Available from https://proceedings.mlr.press/v70/xu17c.html.