Approximate Steepest Coordinate Descent

Sebastian U. Stich, Anant Raj, Martin Jaggi
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:3251-3259, 2017.

Abstract

We propose a new rule for coordinate selection in coordinate descent methods for huge-scale optimization. The efficiency of this novel scheme is provably better than the efficiency of uniformly random selection, and can reach the efficiency of steepest coordinate descent (SCD), enabling an acceleration of a factor of up to $n$, the number of coordinates. In many practical applications, our scheme can be implemented at essentially no extra cost, with computational efficiency very close to that of the faster uniform selection. Numerical experiments with Lasso and Ridge regression show promising improvements, in line with our theoretical guarantees.
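For intuition, the sketch below contrasts the two baseline selection rules the abstract compares: uniform-random coordinate selection and the steepest (Gauss-Southwell) rule, which picks the coordinate with the largest absolute gradient entry, on a small ridge-regression problem. This is a minimal illustration under our own assumptions (function names, step sizes, and data are ours); it does not implement the approximate selection scheme proposed in the paper.

import numpy as np

# Coordinate descent on ridge regression
#   f(x) = 0.5*||A x - b||^2 + 0.5*lam*||x||^2
# comparing uniform-random coordinate selection with the steepest
# (Gauss-Southwell) rule. Illustrative sketch only, not the paper's ASCD scheme.

def coordinate_descent(A, b, lam=0.1, rule="steepest", iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    # Per-coordinate Lipschitz constants L_i = ||a_i||^2 + lam
    L = np.sum(A ** 2, axis=0) + lam
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + lam * x
        if rule == "steepest":
            i = int(np.argmax(np.abs(grad)))  # Gauss-Southwell: largest |gradient| entry
        else:
            i = int(rng.integers(n))          # uniform-random coordinate
        x[i] -= grad[i] / L[i]                # exact minimization along coordinate i
    return x

rng = np.random.default_rng(1)
A, b = rng.standard_normal((100, 20)), rng.standard_normal(100)
for rule in ("uniform", "steepest"):
    x = coordinate_descent(A, b, rule=rule)
    obj = 0.5 * np.sum((A @ x - b) ** 2) + 0.05 * np.sum(x ** 2)
    print(rule, obj)

With the same budget of iterations, the steepest rule typically reaches a lower objective per step than uniform selection, at the cost of evaluating the full gradient to find the best coordinate; reducing that per-step cost is the motivation for the approximate scheme studied in the paper.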

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-stich17a,
  title     = {Approximate Steepest Coordinate Descent},
  author    = {Sebastian U. Stich and Anant Raj and Martin Jaggi},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {3251--3259},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/stich17a/stich17a.pdf},
  url       = {https://proceedings.mlr.press/v70/stich17a.html},
  abstract  = {We propose a new selection rule for the coordinate selection in coordinate descent methods for huge-scale optimization. The efficiency of this novel scheme is provably better than the efficiency of uniformly random selection, and can reach the efficiency of steepest coordinate descent (SCD), enabling an acceleration of a factor of up to $n$, the number of coordinates. In many practical applications, our scheme can be implemented at no extra cost and computational efficiency very close to the faster uniform selection. Numerical experiments with Lasso and Ridge regression show promising improvements, in line with our theoretical guarantees.}
}
APA
Stich, S. U., Raj, A. & Jaggi, M. (2017). Approximate Steepest Coordinate Descent. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:3251-3259. Available from https://proceedings.mlr.press/v70/stich17a.html.