Adaptive Stochastic Alternating Direction Method of Multipliers

Peilin Zhao, Jinwei Yang, Tong Zhang, Ping Li
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:69-77, 2015.

Abstract

The Alternating Direction Method of Multipliers (ADMM) has been studied for years. Traditional ADMM algorithms need to compute, at each iteration, an (empirical) expected loss function over all training examples, so the per-iteration computational cost is proportional to the number of training examples. To reduce this cost, stochastic ADMM algorithms were proposed: they replace the expected loss with a random loss associated with one uniformly drawn example, plus a Bregman divergence term. The Bregman divergence, however, is derived from a simple second-order proximal function, namely half the squared norm, which can be a suboptimal choice. In this paper, we present a new family of stochastic ADMM algorithms with optimal second-order proximal functions, which yields a family of adaptive stochastic ADMM methods. We prove that their regret bounds are as good as those achievable by the best proximal function chosen in hindsight. Encouraging empirical results on a variety of real-world datasets confirm the effectiveness and efficiency of the proposed algorithms.
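
To make the construction concrete, the following is a minimal sketch of a linearized stochastic ADMM update with a Bregman divergence term and an adaptive proximal choice, in the spirit of the abstract. The notation (f, g, A, B, b, beta, eta_t, H_t, delta) is assumed here for illustration and may differ from the paper's.

For the constrained problem $\min_{x,y} f(x) + g(y)$ subject to $Ax + By = b$, the stochastic x-step takes the form
\[
x_{t+1} = \arg\min_{x} \; \langle \nabla \ell(x_t;\xi_t),\, x \rangle
  + \frac{\beta}{2}\,\bigl\| A x + B y_t - b + \lambda_t/\beta \bigr\|_2^2
  + \frac{1}{\eta_t}\, B_{\phi_t}(x, x_t),
\]
where $\xi_t$ is one uniformly drawn training example and $B_{\phi_t}(x,x_t) = \phi_t(x) - \phi_t(x_t) - \langle \nabla\phi_t(x_t),\, x - x_t\rangle$ is the Bregman divergence induced by the proximal function $\phi_t$. The non-adaptive choice is $\phi_t(x) = \tfrac{1}{2}\|x\|_2^2$, so $B_{\phi_t}(x,x_t) = \tfrac{1}{2}\|x-x_t\|_2^2$. An adaptive choice, as a plausible instance of the second-order proximal functions the abstract describes, is $\phi_t(x) = \tfrac{1}{2}\langle x, H_t x\rangle$ with, e.g., an AdaGrad-style diagonal $H_t = \delta I + \mathrm{diag}\bigl(\textstyle\sum_{s\le t} g_s \odot g_s\bigr)^{1/2}$ built from the observed stochastic gradients $g_s$, which gives $B_{\phi_t}(x,x_t) = \tfrac{1}{2}\|x - x_t\|_{H_t}^2$.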

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-zhaob15,
  title     = {Adaptive Stochastic Alternating Direction Method of Multipliers},
  author    = {Zhao, Peilin and Yang, Jinwei and Zhang, Tong and Li, Ping},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {69--77},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/zhaob15.pdf},
  url       = {https://proceedings.mlr.press/v37/zhaob15.html}
}
APA
Zhao, P., Yang, J., Zhang, T., & Li, P. (2015). Adaptive Stochastic Alternating Direction Method of Multipliers. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:69-77. Available from https://proceedings.mlr.press/v37/zhaob15.html.
