Screening rules for Lasso with non-convex Sparse Regularizers

Alain Rakotomamonjy, Gilles Gasso, Joseph Salmon
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5341-5350, 2019.

Abstract

Leveraging the convexity of the Lasso problem, screening rules help accelerate solvers by discarding irrelevant variables during the optimization process. However, several non-convex regularizers for the Lasso have been proposed in the literature because they provide better theoretical guarantees for identifying relevant variables. This work is the first to introduce a screening-rule strategy into a non-convex Lasso solver. The approach we propose is based on an iterative majorization-minimization (MM) strategy that includes a screening rule in the inner solver and a condition for propagating screened variables between MM iterations. In addition to improving solver efficiency, we also provide guarantees that the inner solver identifies the zero components of its critical point in finite time. Our experimental analysis illustrates the significant computational gain brought by the new screening rule compared to classical coordinate-descent or proximal gradient descent methods.
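To illustrate the structure of the approach (this is a minimal sketch, not the authors' implementation): a non-convex penalty such as the log-sum penalty can be majorized at each MM iteration by a weighted-L1 term, and the resulting inner weighted Lasso can be solved by coordinate descent while discarding coordinates. The screening test below is a simplified heuristic stand-in for the paper's safe rule, and the propagation condition between MM iterations is omitted; the function name and thresholds are assumptions for illustration only.

```python
import numpy as np

def mm_ncvx_lasso(X, y, lam, eps=1e-2, n_mm=10, n_inner=50):
    """Illustrative MM sketch for a non-convex Lasso with the log-sum
    penalty lam * sum(log(1 + |w_j| / eps)).  At each MM iteration the
    penalty is majorized by a weighted-L1 term with per-coordinate
    weights alpha_j = lam / (eps + |w_j|), and the inner weighted Lasso
    is solved by coordinate descent with a simple dynamic screening
    test (a heuristic, NOT the paper's safe rule)."""
    n, d = X.shape
    w = np.zeros(d)
    L = (X ** 2).sum(axis=0)  # per-coordinate curvature constants
    for _ in range(n_mm):
        alpha = lam / (eps + np.abs(w))  # majorizer weights
        active = np.ones(d, dtype=bool)
        for _ in range(n_inner):
            r = y - X @ w
            corr = np.abs(X.T @ r)
            # Heuristic screening: discard coordinates whose correlation
            # with the residual is well below their weight, but never
            # screen a currently nonzero coefficient.
            active &= ~(corr < 0.5 * alpha)
            active |= w != 0
            for j in np.where(active)[0]:
                # Coordinate-descent update with soft-thresholding at
                # the weighted level alpha_j / L_j.
                z = w[j] + X[:, j] @ r / L[j]
                w_new = np.sign(z) * max(abs(z) - alpha[j] / L[j], 0.0)
                r += X[:, j] * (w[j] - w_new)
                w[j] = w_new
    return w
```

Screened coordinates are simply skipped by the inner coordinate-descent sweeps, which is where the computational gain comes from: most of the work per sweep is proportional to the number of active coordinates.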

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-rakotomamonjy19a,
  title     = {Screening rules for Lasso with non-convex Sparse Regularizers},
  author    = {Rakotomamonjy, Alain and Gasso, Gilles and Salmon, Joseph},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5341--5350},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/rakotomamonjy19a/rakotomamonjy19a.pdf},
  url       = {https://proceedings.mlr.press/v97/rakotomamonjy19a.html},
  abstract  = {Leveraging on the convexity of the Lasso problem, screening rules help in accelerating solvers by discarding irrelevant variables, during the optimization process. However, because they provide better theoretical guarantees in identifying relevant variables, several non-convex regularizers for the Lasso have been proposed in the literature. This work is the first that introduces a screening rule strategy into a non-convex Lasso solver. The approach we propose is based on a iterative majorization-minimization (MM) strategy that includes a screening rule in the inner solver and a condition for propagating screened variables between iterations of MM. In addition to improve efficiency of solvers, we also provide guarantees that the inner solver is able to identify the zeros components of its critical point in finite time. Our experimental analysis illustrates the significant computational gain brought by the new screening rule compared to classical coordinate-descent or proximal gradient descent methods.}
}
Endnote
%0 Conference Paper
%T Screening rules for Lasso with non-convex Sparse Regularizers
%A Alain Rakotomamonjy
%A Gilles Gasso
%A Joseph Salmon
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-rakotomamonjy19a
%I PMLR
%P 5341--5350
%U https://proceedings.mlr.press/v97/rakotomamonjy19a.html
%V 97
%X Leveraging on the convexity of the Lasso problem, screening rules help in accelerating solvers by discarding irrelevant variables, during the optimization process. However, because they provide better theoretical guarantees in identifying relevant variables, several non-convex regularizers for the Lasso have been proposed in the literature. This work is the first that introduces a screening rule strategy into a non-convex Lasso solver. The approach we propose is based on a iterative majorization-minimization (MM) strategy that includes a screening rule in the inner solver and a condition for propagating screened variables between iterations of MM. In addition to improve efficiency of solvers, we also provide guarantees that the inner solver is able to identify the zeros components of its critical point in finite time. Our experimental analysis illustrates the significant computational gain brought by the new screening rule compared to classical coordinate-descent or proximal gradient descent methods.
APA
Rakotomamonjy, A., Gasso, G. & Salmon, J. (2019). Screening rules for Lasso with non-convex Sparse Regularizers. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5341-5350. Available from https://proceedings.mlr.press/v97/rakotomamonjy19a.html.

Related Material