Doubly Greedy Primal-Dual Coordinate Descent for Sparse Empirical Risk Minimization

Qi Lei, Ian En-Hsu Yen, Chao-yuan Wu, Inderjit S. Dhillon, Pradeep Ravikumar
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2034-2042, 2017.

Abstract

We consider the popular problem of sparse empirical risk minimization with linear predictors and a large number of both features and observations. With a convex-concave saddle point objective reformulation, we propose a Doubly Greedy Primal-Dual Coordinate Descent algorithm that exploits sparsity in both the primal and dual variables. It enjoys a low cost per iteration, and our theoretical analysis shows that it converges linearly with a good iteration complexity, provided that the set of primal variables is sparse. We then extend this algorithm further to leverage active sets. The resulting new algorithm is even faster, and experiments on large-scale multi-class data sets show that our algorithm achieves up to a 30-fold speedup over several state-of-the-art optimization methods.
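To make the "doubly greedy" idea concrete, the sketch below works a toy instance of the standard saddle-point reformulation of regularized ERM, min_x max_y (1/n) y^T A x - (1/n) Σ_i φ*_i(y_i) + (λ/2)||x||², using the squared loss so that φ*_i(y) = y²/2 + b_i y. Each iteration greedily selects one dual and one primal coordinate to update. This is an illustrative sketch under those assumptions, not the paper's exact algorithm: the function name, step size, and stopping rule are invented, and the paper's method maintains the products Ax and A^T y incrementally to keep the per-iteration cost low, whereas this sketch recomputes them for clarity.

```python
import numpy as np

def doubly_greedy_pd(A, b, lam=0.1, eta=0.2, iters=20000):
    """Illustrative doubly greedy primal-dual coordinate scheme for
    min_x max_y (1/n) y^T A x - (1/n) sum_i phi*_i(y_i) + (lam/2)||x||^2,
    where phi*_i(y) = y^2/2 + b_i*y (conjugate of the squared loss)."""
    n, d = A.shape
    x = np.zeros(d)
    y = np.zeros(n)
    for _ in range(iters):
        # Greedy dual coordinate: pick the largest violation of the dual
        # optimality condition y_i = a_i^T x - b_i, then update it exactly.
        r = A @ x - b - y
        i = np.argmax(np.abs(r))
        y[i] += r[i]
        # Greedy primal coordinate: pick the largest partial gradient
        # of the primal variable, then take a small gradient step.
        g = A.T @ y / n + lam * x
        j = np.argmax(np.abs(g))
        x[j] -= eta * g[j]
    return x, y
```

At the saddle point, x solves the ridge problem min_x (1/2n)||Ax - b||² + (λ/2)||x||², so the sketch can be sanity-checked against the closed-form ridge solution.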

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-lei17b,
  title     = {Doubly Greedy Primal-Dual Coordinate Descent for Sparse Empirical Risk Minimization},
  author    = {Qi Lei and Ian En-Hsu Yen and Chao-yuan Wu and Inderjit S. Dhillon and Pradeep Ravikumar},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {2034--2042},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/lei17b/lei17b.pdf},
  url       = {http://proceedings.mlr.press/v70/lei17b.html},
  abstract  = {We consider the popular problem of sparse empirical risk minimization with linear predictors and a large number of both features and observations. With a convex-concave saddle point objective reformulation, we propose a Doubly Greedy Primal-Dual Coordinate Descent algorithm that is able to exploit sparsity in both primal and dual variables. It enjoys a low cost per iteration and our theoretical analysis shows that it converges linearly with a good iteration complexity, provided that the set of primal variables is sparse. We then extend this algorithm further to leverage active sets. The resulting new algorithm is even faster, and experiments on large-scale Multi-class data sets show that our algorithm achieves up to 30 times speedup on several state-of-the-art optimization methods.}
}
Endnote
%0 Conference Paper
%T Doubly Greedy Primal-Dual Coordinate Descent for Sparse Empirical Risk Minimization
%A Qi Lei
%A Ian En-Hsu Yen
%A Chao-yuan Wu
%A Inderjit S. Dhillon
%A Pradeep Ravikumar
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-lei17b
%I PMLR
%P 2034--2042
%U http://proceedings.mlr.press/v70/lei17b.html
%V 70
%X We consider the popular problem of sparse empirical risk minimization with linear predictors and a large number of both features and observations. With a convex-concave saddle point objective reformulation, we propose a Doubly Greedy Primal-Dual Coordinate Descent algorithm that is able to exploit sparsity in both primal and dual variables. It enjoys a low cost per iteration and our theoretical analysis shows that it converges linearly with a good iteration complexity, provided that the set of primal variables is sparse. We then extend this algorithm further to leverage active sets. The resulting new algorithm is even faster, and experiments on large-scale Multi-class data sets show that our algorithm achieves up to 30 times speedup on several state-of-the-art optimization methods.
APA
Lei, Q., Yen, I.E., Wu, C., Dhillon, I.S. & Ravikumar, P. (2017). Doubly Greedy Primal-Dual Coordinate Descent for Sparse Empirical Risk Minimization. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:2034-2042. Available from http://proceedings.mlr.press/v70/lei17b.html.