Theoretical Analysis of Leave-one-out Cross Validation for Non-differentiable Penalties under High-dimensional Settings

Haolin Zou, Arnab Auddy, Kamiar Rahnama Rad, Arian Maleki
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4033-4041, 2025.

Abstract

Despite a large and significant body of recent work focusing on the hyperparameter tuning of regularized models in the high dimensional regime, a theoretical understanding of this problem for non-differentiable penalties such as generalized LASSO and nuclear norm is missing. In this paper we resolve this challenge. We study the hyperparameter tuning problem in the proportional high dimensional regime where both the sample size $n$ and number of features $p$ are large, and $n/p$ and the signal-to-noise ratio (per observation) remain finite. To achieve this goal, we first provide finite-sample upper bounds on the expected squared error of leave-one-out cross-validation (LO) in estimating the out-of-sample risk. Building on this result, we establish the consistency of the hyperparameter tuning method that is based on minimizing LO’s estimate. Our simulation results confirm the accuracy and sharpness of our theoretical results.
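The tuning rule studied in the paper is simple to state: compute the leave-one-out (LO) estimate of the out-of-sample risk at each candidate regularization level and pick the minimizer. Below is a minimal sketch of that rule for the standard LASSO, using scikit-learn's Lasso solver and a brute-force LO loop; the data-generating setup and the grid of lambda values are illustrative assumptions, and the sketch does not reproduce the paper's finite-sample analysis or any fast LO approximation.

```python
# Minimal sketch: choose the LASSO penalty level by minimizing the
# leave-one-out (LO) estimate of out-of-sample squared-error risk.
# The dimensions, signal, and lambda grid below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n, p = 100, 200                      # proportional regime: n/p stays bounded
beta = np.zeros(p)
beta[:10] = 1.0                      # sparse signal
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

lambdas = np.logspace(-2, 0, 10)     # candidate regularization levels
lo_risk = []
for lam in lambdas:
    errs = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = Lasso(alpha=lam, max_iter=10_000)
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        errs.append((y[test_idx][0] - pred[0]) ** 2)
    lo_risk.append(np.mean(errs))    # LO estimate of out-of-sample risk at lam

best_lam = lambdas[int(np.argmin(lo_risk))]
print(f"lambda minimizing the LO risk estimate: {best_lam:.3f}")
```

The brute-force loop refits the model n times per lambda, which is exactly the quantity whose accuracy the paper bounds; in practice one would replace it with a cheaper LO approximation, but the selection rule (argmin of the LO risk curve) is the same.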

Cite this Paper

BibTeX
@InProceedings{pmlr-v258-zou25b,
  title     = {Theoretical Analysis of Leave-one-out Cross Validation for Non-differentiable Penalties under High-dimensional Settings},
  author    = {Zou, Haolin and Auddy, Arnab and Rad, Kamiar Rahnama and Maleki, Arian},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4033--4041},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/zou25b/zou25b.pdf},
  url       = {https://proceedings.mlr.press/v258/zou25b.html},
  abstract  = {Despite a large and significant body of recent work focusing on the hyperparameter tuning of regularized models in the high dimensional regime, a theoretical understanding of this problem for non-differentiable penalties such as generalized LASSO and nuclear norm is missing. In this paper we resolve this challenge. We study the hyperparameter tuning problem in the proportional high dimensional regime where both the sample size $n$ and number of features $p$ are large, and $n/p$ and the signal-to-noise ratio (per observation) remain finite. To achieve this goal, we first provide finite-sample upper bounds on the expected squared error of leave-one-out cross-validation (LO) in estimating the out-of-sample risk. Building on this result, we establish the consistency of the hyperparameter tuning method that is based on minimizing LO’s estimate. Our simulation results confirm the accuracy and sharpness of our theoretical results.}
}
Endnote
%0 Conference Paper
%T Theoretical Analysis of Leave-one-out Cross Validation for Non-differentiable Penalties under High-dimensional Settings
%A Haolin Zou
%A Arnab Auddy
%A Kamiar Rahnama Rad
%A Arian Maleki
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-zou25b
%I PMLR
%P 4033--4041
%U https://proceedings.mlr.press/v258/zou25b.html
%V 258
%X Despite a large and significant body of recent work focusing on the hyperparameter tuning of regularized models in the high dimensional regime, a theoretical understanding of this problem for non-differentiable penalties such as generalized LASSO and nuclear norm is missing. In this paper we resolve this challenge. We study the hyperparameter tuning problem in the proportional high dimensional regime where both the sample size $n$ and number of features $p$ are large, and $n/p$ and the signal-to-noise ratio (per observation) remain finite. To achieve this goal, we first provide finite-sample upper bounds on the expected squared error of leave-one-out cross-validation (LO) in estimating the out-of-sample risk. Building on this result, we establish the consistency of the hyperparameter tuning method that is based on minimizing LO’s estimate. Our simulation results confirm the accuracy and sharpness of our theoretical results.
APA
Zou, H., Auddy, A., Rad, K.R. & Maleki, A. (2025). Theoretical Analysis of Leave-one-out Cross Validation for Non-differentiable Penalties under High-dimensional Settings. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4033-4041. Available from https://proceedings.mlr.press/v258/zou25b.html.