Efficient Designs Of SLOPE Penalty Sequences In Finite Dimension

Yiliang Zhang, Zhiqi Bu
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3277-3285, 2021.

Abstract

In linear regression, SLOPE is a new convex optimization method that generalizes the Lasso via the sorted $\ell_1$ penalty: larger fitted coefficients are penalized more heavily. This magnitude-dependent regularization requires a penalty sequence $\boldsymbol{\lambda}$ as input, rather than the scalar penalty of the Lasso, which makes its design computationally expensive. In this paper, we propose two efficient algorithms to design the possibly high-dimensional SLOPE penalty in order to minimize the mean squared error. For Gaussian data matrices, we propose a first-order Projected Gradient Descent (PGD) under the Approximate Message Passing regime. For general data matrices, we present a zeroth-order Coordinate Descent (CD) to design a sub-class of SLOPE, referred to as the $k$-level SLOPE. Our CD allows a useful trade-off between accuracy and computation speed. We demonstrate the performance of SLOPE with our designs via extensive experiments on synthetic data and real-world datasets.
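For context, SLOPE solves $\hat\beta(\boldsymbol{\lambda}) = \arg\min_b \frac{1}{2}\|y - Xb\|_2^2 + \sum_{i=1}^p \lambda_i |b|_{(i)}$, where $|b|_{(1)} \ge \dots \ge |b|_{(p)}$ are the absolute coefficients in decreasing order and $\lambda_1 \ge \dots \ge \lambda_p \ge 0$. The sketch below (in Python; not the authors' code, and the helper names are ours for illustration) evaluates the sorted $\ell_1$ penalty and builds a $k$-level penalty sequence, in which $\boldsymbol{\lambda}$ takes only $k$ distinct values:

    import numpy as np

    def sorted_l1_penalty(beta, lam):
        # J_lam(beta) = sum_i lam[i] * |beta|_(i), with |beta| sorted in
        # decreasing order and lam non-increasing: lam[0] >= ... >= lam[p-1] >= 0.
        abs_desc = np.sort(np.abs(beta))[::-1]
        return float(np.dot(lam, abs_desc))

    def k_level_sequence(levels, sizes):
        # Hypothetical helper: a k-level SLOPE sequence whose p entries take
        # only k distinct values (levels given in decreasing order).
        lam = np.concatenate([np.full(s, v, dtype=float)
                              for v, s in zip(levels, sizes)])
        assert np.all(np.diff(lam) <= 0), "lambda must be non-increasing"
        return lam

    # 2-level SLOPE on p = 5 coefficients: lam = (1.5, 1.5, 0.5, 0.5, 0.5)
    lam = k_level_sequence(levels=[1.5, 0.5], sizes=[2, 3])
    beta = np.array([3.0, -1.0, 0.0, 2.0, -0.5])
    print(sorted_l1_penalty(beta, lam))  # 1.5*3 + 1.5*2 + 0.5*1 + 0.5*0.5 = 8.25

With $k=1$ this reduces to the Lasso penalty $\lambda\|b\|_1$, while $k=p$ recovers full SLOPE; tuning only the $k$ levels (and their group sizes) is what gives the paper's CD its trade-off between accuracy and computation speed.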

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-zhang21k,
  title     = {Efficient Designs Of SLOPE Penalty Sequences In Finite Dimension},
  author    = {Zhang, Yiliang and Bu, Zhiqi},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {3277--3285},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/zhang21k/zhang21k.pdf},
  url       = {https://proceedings.mlr.press/v130/zhang21k.html},
  abstract  = {In linear regression, SLOPE is a new convex optimization method that generalizes the Lasso via the sorted $\ell_1$ penalty: larger fitted coefficients are penalized more heavily. This magnitude-dependent regularization requires a penalty sequence $\boldsymbol{\lambda}$ as input, rather than the scalar penalty of the Lasso, which makes its design computationally expensive. In this paper, we propose two efficient algorithms to design the possibly high-dimensional SLOPE penalty in order to minimize the mean squared error. For Gaussian data matrices, we propose a first-order Projected Gradient Descent (PGD) under the Approximate Message Passing regime. For general data matrices, we present a zeroth-order Coordinate Descent (CD) to design a sub-class of SLOPE, referred to as the $k$-level SLOPE. Our CD allows a useful trade-off between accuracy and computation speed. We demonstrate the performance of SLOPE with our designs via extensive experiments on synthetic data and real-world datasets.}
}
Endnote
%0 Conference Paper
%T Efficient Designs Of SLOPE Penalty Sequences In Finite Dimension
%A Yiliang Zhang
%A Zhiqi Bu
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-zhang21k
%I PMLR
%P 3277--3285
%U https://proceedings.mlr.press/v130/zhang21k.html
%V 130
%X In linear regression, SLOPE is a new convex optimization method that generalizes the Lasso via the sorted $\ell_1$ penalty: larger fitted coefficients are penalized more heavily. This magnitude-dependent regularization requires a penalty sequence $\boldsymbol{\lambda}$ as input, rather than the scalar penalty of the Lasso, which makes its design computationally expensive. In this paper, we propose two efficient algorithms to design the possibly high-dimensional SLOPE penalty in order to minimize the mean squared error. For Gaussian data matrices, we propose a first-order Projected Gradient Descent (PGD) under the Approximate Message Passing regime. For general data matrices, we present a zeroth-order Coordinate Descent (CD) to design a sub-class of SLOPE, referred to as the $k$-level SLOPE. Our CD allows a useful trade-off between accuracy and computation speed. We demonstrate the performance of SLOPE with our designs via extensive experiments on synthetic data and real-world datasets.
APA
Zhang, Y. & Bu, Z. (2021). Efficient Designs Of SLOPE Penalty Sequences In Finite Dimension. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:3277-3285. Available from https://proceedings.mlr.press/v130/zhang21k.html.
