The Well-Tempered Lasso

Yuanzhi Li, Yoram Singer
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3024-3032, 2018.

Abstract

We study the complexity of the entire regularization path for least squares regression with 1-norm penalty, known as the Lasso. Every regression parameter in the Lasso changes linearly as a function of the regularization value. The number of changes is regarded as the Lasso’s complexity. Experimental results using exact path following exhibit polynomial complexity of the Lasso in the problem size. Alas, the path complexity of the Lasso on artificially designed regression problems is exponential. We use smoothed analysis as a mechanism for bridging the gap between worst case settings and the de facto low complexity. Our analysis assumes that the observed data has a tiny amount of intrinsic noise. We then prove that the Lasso’s complexity is polynomial in the problem size.
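For context, the Lasso solves min_w (1/2)||Xw - y||_2^2 + lambda * ||w||_1, and the solution w(lambda) is piecewise linear in lambda; the path complexity studied in the paper is the number of linear segments. Below is a minimal illustrative sketch (not from the paper) that traces the exact path with scikit-learn's LARS-based solver and counts the segments; the problem sizes, noise level, and the use of sklearn.linear_model.lars_path are assumptions made here for illustration.

import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, d = 100, 20                       # problem size: n samples, d features
X = rng.standard_normal((n, d))
# Observations carry a tiny amount of intrinsic noise, echoing the
# smoothed-analysis setting of the paper (noise scale chosen arbitrarily).
y = X @ rng.standard_normal(d) + 1e-2 * rng.standard_normal(n)

# lars_path with method="lasso" follows the exact regularization path:
# `alphas` are the lambda values at which the active set changes, and
# between consecutive alphas every coefficient is linear in lambda.
alphas, _, coefs = lars_path(X, y, method="lasso")

# Path complexity = number of linear segments (kinks) in the path.
print("path complexity:", len(alphas) - 1)

On random data like this the segment count is typically modest, on the order of the problem size, consistent with the polynomial behavior the paper proves under small intrinsic noise.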

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-li18f, title = {The Well-Tempered Lasso}, author = {Li, Yuanzhi and Singer, Yoram}, booktitle = {Proceedings of the 35th International Conference on Machine Learning}, pages = {3024--3032}, year = {2018}, editor = {Dy, Jennifer and Krause, Andreas}, volume = {80}, series = {Proceedings of Machine Learning Research}, month = {10--15 Jul}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v80/li18f/li18f.pdf}, url = {https://proceedings.mlr.press/v80/li18f.html}, abstract = {We study the complexity of the entire regularization path for least squares regression with 1-norm penalty, known as the Lasso. Every regression parameter in the Lasso changes linearly as a function of the regularization value. The number of changes is regarded as the Lasso’s complexity. Experimental results using exact path following exhibit polynomial complexity of the Lasso in the problem size. Alas, the path complexity of the Lasso on artificially designed regression problems is exponential. We use smoothed analysis as a mechanism for bridging the gap between worst case settings and the de facto low complexity. Our analysis assumes that the observed data has a tiny amount of intrinsic noise. We then prove that the Lasso’s complexity is polynomial in the problem size.} }
Endnote
%0 Conference Paper %T The Well-Tempered Lasso %A Yuanzhi Li %A Yoram Singer %B Proceedings of the 35th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2018 %E Jennifer Dy %E Andreas Krause %F pmlr-v80-li18f %I PMLR %P 3024--3032 %U https://proceedings.mlr.press/v80/li18f.html %V 80 %X We study the complexity of the entire regularization path for least squares regression with 1-norm penalty, known as the Lasso. Every regression parameter in the Lasso changes linearly as a function of the regularization value. The number of changes is regarded as the Lasso’s complexity. Experimental results using exact path following exhibit polynomial complexity of the Lasso in the problem size. Alas, the path complexity of the Lasso on artificially designed regression problems is exponential. We use smoothed analysis as a mechanism for bridging the gap between worst case settings and the de facto low complexity. Our analysis assumes that the observed data has a tiny amount of intrinsic noise. We then prove that the Lasso’s complexity is polynomial in the problem size.
APA
Li, Y. & Singer, Y. (2018). The Well-Tempered Lasso. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3024-3032. Available from https://proceedings.mlr.press/v80/li18f.html.
