The lasso, persistence, and cross-validation

Darren Homrighausen, Daniel McDonald
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):1031-1039, 2013.

Abstract

During the last fifteen years, the lasso procedure has been the target of a substantial amount of theoretical and applied research. Correspondingly, many results are known about its behavior for a fixed or optimally chosen smoothing parameter (given up to unknown constants). Much less, however, is known about the lasso's behavior when the smoothing parameter is chosen in a data-dependent way. To this end, we give the first result on the risk consistency of the lasso when the smoothing parameter is chosen via cross-validation. We consider the high-dimensional setting in which the number of predictors p = n^α, α > 0, grows with the number of observations n.
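The procedure the abstract studies — fitting the lasso over a grid of smoothing parameters and selecting the one minimizing K-fold cross-validation error — can be sketched as follows. This is an illustrative numpy-only implementation (coordinate descent for the lasso, squared-error CV), not the authors' code; the grid, fold count, and data-generating choices are arbitrary assumptions. The high-dimensional regime is mimicked by setting p = ⌈n^α⌉ with α > 1, so p > n.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n   # per-coordinate curvature ||x_j||^2 / n
    r = y - X @ b                        # running residual
    for _ in range(n_iter):
        for j in range(p):
            if col_ss[j] == 0.0:
                continue
            r += X[:, j] * b[j]          # partial residual excluding coordinate j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_ss[j]
            r -= X[:, j] * b[j]
    return b

def cv_lasso(X, y, lams, k=5, seed=0):
    """Pick lam from `lams` by K-fold CV prediction error, then refit on all data."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)
    errs = np.zeros(len(lams))
    for fold in folds:
        train = np.ones(n, dtype=bool)
        train[fold] = False
        for i, lam in enumerate(lams):
            b = lasso_cd(X[train], y[train], lam)
            errs[i] += np.mean((y[fold] - X[fold] @ b) ** 2)
    best = lams[int(np.argmin(errs))]
    return best, lasso_cd(X, y, best)

# High-dimensional example: p = ceil(n^alpha) with alpha > 1, sparse truth.
rng = np.random.default_rng(1)
n, alpha = 100, 1.2
p = int(np.ceil(n ** alpha))            # p = 251 > n = 100
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                           # 5 active predictors
y = X @ beta + rng.standard_normal(n)

lam_grid = [0.01, 0.05, 0.1, 0.2, 0.5]
best_lam, b_hat = cv_lasso(X, y, lam_grid)
```

The CV-selected fit is typically sparse, which is the point of the smoothing parameter: larger values of lam shrink more coefficients exactly to zero.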
