Dual Averaging and Proximal Gradient Descent for Online Alternating Direction Multiplier Method
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):392-400, 2013.
Abstract
We develop new stochastic optimization methods that are applicable to a wide range of structured regularizations. Our methods combine basic stochastic optimization techniques with the Alternating Direction Multiplier Method (ADMM). ADMM is a general framework for optimizing a composite function, and has a wide range of applications. We propose two types of online variants of ADMM, which correspond to online proximal gradient descent and regularized dual averaging, respectively. The proposed algorithms are computationally efficient and easy to implement. Our methods yield O(1/\sqrt{T}) convergence of the expected risk. Moreover, the online proximal gradient descent type method yields O(\log(T)/T) convergence for a strongly convex loss. Numerical experiments show the effectiveness of our methods in learning tasks with structured sparsity such as overlapped group lasso.
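To give a concrete sense of the kind of update the abstract describes, the following is a minimal sketch of a linearized (proximal-gradient-style) stochastic ADMM step for an l1-regularized least-squares problem. The splitting, variable names, and step sizes are illustrative assumptions for this toy setting; this is not the paper's exact OPG-ADMM or RDA-ADMM algorithm.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_linearized_admm(X, y, lam=0.1, rho=1.0, eta=0.1, epochs=5, seed=0):
    """Illustrative stochastic ADMM for min_w 0.5*||Xw - y||^2 + lam*||w||_1,
    split as f(w) + g(z) subject to w - z = 0.
    Each step linearizes the loss at the current iterate (proximal-gradient
    style), so the w-update has a closed form."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)   # primal variable on the loss side
    z = np.zeros(d)   # primal variable on the regularizer side
    u = np.zeros(d)   # (unscaled) dual variable for the constraint w = z
    for _ in range(epochs):
        for i in rng.permutation(n):
            xi, yi = X[i], y[i]
            # Stochastic gradient of the unregularized loss at w.
            g = (xi @ w - yi) * xi
            # Linearized w-update: argmin_w <g, w> + <u, w - z>
            #   + (rho/2)*||w - z||^2 + (1/(2*eta))*||w - w_old||^2
            w = (w / eta - g - u + rho * z) / (1.0 / eta + rho)
            # z-update: proximal step on the l1 regularizer.
            z = soft_threshold(w + u / rho, lam / rho)
            # Dual ascent on the constraint residual.
            u = u + rho * (w - z)
    return z  # z carries the exact sparsity pattern

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]
    y = X @ w_true + 0.01 * rng.standard_normal(200)
    print(np.round(online_linearized_admm(X, y, lam=0.5), 2))
```

The ADMM splitting is what makes structured regularizers such as overlapped group lasso tractable: the z-update isolates the regularizer in its own proximal step, so only that step needs to change when the penalty changes.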