Fast, blind, and accurate: Tuning-free sparse regression with global linear convergence

Claudio Mayrink Verdun, Oleh Melnyk, Felix Krahmer, Peter Jung
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3823-3872, 2024.

Abstract

Many algorithms for high-dimensional regression problems require the calibration of regularization hyperparameters. This, in turn, often requires the knowledge of the unknown noise variance in order to produce meaningful solutions. Recent works show, however, that there exist certain estimators that are pivotal, i.e., the regularization parameter does not depend on the noise level; the most remarkable example being the square-root lasso. Such estimators have also been shown to exhibit strong connections to distributionally robust optimization. Despite the progress in the design of pivotal estimators, the resulting minimization problem is challenging as both the loss function and the regularization term are non-smooth. To date, the design of fast, robust, and scalable algorithms with strong convergence rate guarantees is still an open problem. This work addresses this problem by showing that an iteratively reweighted least squares (IRLS) algorithm exhibits global linear convergence under the weakest assumption available in the literature. We expect our findings will also have implications for multi-task learning and distributionally robust optimization.
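
For reference (this formulation comes from the standard square-root lasso literature, not verbatim from the paper), the pivotal estimator discussed above is the minimizer of ||Ax - y||_2 + lambda * ||x||_1 over x, where A is the design matrix, y the noisy observations, and lambda a regularization parameter whose theoretically valid choice does not depend on the noise level. Since both the residual norm and the l1 penalty are non-smooth, an IRLS scheme replaces each with a quadratic surrogate and solves a weighted least-squares problem per iteration. Below is a minimal Python sketch of such a scheme with a fixed smoothing floor eps; the paper's actual algorithm, smoothing updates, and the choices behind its global linear convergence guarantee are not reproduced here, and all names and parameters are illustrative.

import numpy as np

def sqrt_lasso_irls(A, y, lam, n_iter=100, eps=1e-6):
    """Generic IRLS sketch for min_x ||Ax - y||_2 + lam * ||x||_1.

    Illustrative only: the paper's algorithm uses its own smoothing and
    update rules, which are not reproduced here.
    """
    # Least-squares (minimum-norm) initialization to avoid degenerate weights.
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        # Quadratic surrogates: ||r||_2 <= ||r||_2^2/(2s) + s/2 with s = ||r^(k)||_2,
        # and |x_i| <= x_i^2/(2 w_i) + w_i/2 with w_i = |x_i^(k)|.
        s = max(np.linalg.norm(A @ x - y), eps)   # smoothed residual norm
        w = np.maximum(np.abs(x), eps)            # smoothed coordinate magnitudes
        # Minimizing the surrogate is a weighted ridge-type least-squares solve.
        H = A.T @ A / s + lam * np.diag(1.0 / w)
        x = np.linalg.solve(H, A.T @ y / s)
    return x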

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-mayrink-verdun24a,
  title     = {Fast, blind, and accurate: Tuning-free sparse regression with global linear convergence},
  author    = {Mayrink Verdun, Claudio and Melnyk, Oleh and Krahmer, Felix and Jung, Peter},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {3823--3872},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/mayrink-verdun24a/mayrink-verdun24a.pdf},
  url       = {https://proceedings.mlr.press/v247/mayrink-verdun24a.html},
  abstract  = {Many algorithms for high-dimensional regression problems require the calibration of regularization hyperparameters. This, in turn, often requires the knowledge of the unknown noise variance in order to produce meaningful solutions. Recent works show, however, that there exist certain estimators that are pivotal, i.e., the regularization parameter does not depend on the noise level; the most remarkable example being the square-root lasso. Such estimators have also been shown to exhibit strong connections to distributionally robust optimization. Despite the progress in the design of pivotal estimators, the resulting minimization problem is challenging as both the loss function and the regularization term are non-smooth. To date, the design of fast, robust, and scalable algorithms with strong convergence rate guarantees is still an open problem. This work addresses this problem by showing that an iteratively reweighted least squares (IRLS) algorithm exhibits global linear convergence under the weakest assumption available in the literature. We expect our findings will also have implications for multi-task learning and distributionally robust optimization.}
}
Endnote
%0 Conference Paper
%T Fast, blind, and accurate: Tuning-free sparse regression with global linear convergence
%A Claudio Mayrink Verdun
%A Oleh Melnyk
%A Felix Krahmer
%A Peter Jung
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-mayrink-verdun24a
%I PMLR
%P 3823--3872
%U https://proceedings.mlr.press/v247/mayrink-verdun24a.html
%V 247
%X Many algorithms for high-dimensional regression problems require the calibration of regularization hyperparameters. This, in turn, often requires the knowledge of the unknown noise variance in order to produce meaningful solutions. Recent works show, however, that there exist certain estimators that are pivotal, i.e., the regularization parameter does not depend on the noise level; the most remarkable example being the square-root lasso. Such estimators have also been shown to exhibit strong connections to distributionally robust optimization. Despite the progress in the design of pivotal estimators, the resulting minimization problem is challenging as both the loss function and the regularization term are non-smooth. To date, the design of fast, robust, and scalable algorithms with strong convergence rate guarantees is still an open problem. This work addresses this problem by showing that an iteratively reweighted least squares (IRLS) algorithm exhibits global linear convergence under the weakest assumption available in the literature. We expect our findings will also have implications for multi-task learning and distributionally robust optimization.
APA
Mayrink Verdun, C., Melnyk, O., Krahmer, F. & Jung, P. (2024). Fast, blind, and accurate: Tuning-free sparse regression with global linear convergence. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:3823-3872. Available from https://proceedings.mlr.press/v247/mayrink-verdun24a.html.
