No penalty no tears: Least squares in high-dimensional linear models

Xiangyu Wang, David Dunson, Chenlei Leng
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1814-1822, 2016.

Abstract

Ordinary least squares (OLS) is the default method for fitting linear models, but it is not applicable to problems whose dimensionality exceeds the sample size. For such problems, we advocate a generalized version of OLS motivated by ridge regression, and propose two novel three-step algorithms that combine least squares fitting with hard thresholding. The algorithms are methodologically simple and intuitive, computationally efficient and easy to implement, and theoretically appealing in that they select models consistently. Numerical experiments comparing our methods with penalization-based approaches, in simulations and in real data analyses, illustrate the great potential of the proposed algorithms.
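The abstract only sketches the procedure, but its ingredients are concrete enough for a toy illustration. Below is a minimal sketch in Python, assuming (as the ridge motivation suggests) that the generalized OLS estimator for p > n is the ridgeless limit of the ridge estimator, i.e. the minimum-norm least squares solution X'(XX')^{-1}y, followed by hard thresholding of the coefficients and an OLS refit on the selected variables. The function names and the sparsity parameter k are illustrative, not taken from the paper.

    import numpy as np

    def generalized_ols(X, y):
        # Ridge-motivated generalized OLS: the limit of the ridge estimator
        # X'(XX' + rI)^{-1} y as r -> 0 when p > n (the minimum-norm
        # least squares fit); ordinary least squares when p <= n.
        n, p = X.shape
        if p <= n:
            return np.linalg.lstsq(X, y, rcond=None)[0]
        return X.T @ np.linalg.solve(X @ X.T, y)

    def lse_threshold_refit(X, y, k):
        # Illustrative three-step procedure: (1) generalized OLS fit,
        # (2) hard-threshold, keeping the k largest coefficients in
        # magnitude, (3) refit OLS on the selected variables only.
        beta = generalized_ols(X, y)
        support = np.sort(np.argsort(np.abs(beta))[-k:])
        beta_hat = np.zeros(X.shape[1])
        beta_hat[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
        return beta_hat, support

    # Toy usage: n = 50 samples, p = 200 predictors, 5 true signals.
    rng = np.random.default_rng(0)
    n, p, k = 50, 200, 5
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:k] = 3.0
    y = X @ beta_true + rng.standard_normal(n)
    beta_hat, support = lse_threshold_refit(X, y, k)
    print(support)  # ideally recovers the true signal indices 0..4

The appeal of this scheme is that every step is a standard linear-algebra operation: no penalty parameter needs to be tuned along a regularization path, only the number (or magnitude threshold) of coefficients to retain.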

Cite this Paper


BibTeX

@InProceedings{pmlr-v48-wange16,
  title     = {No penalty no tears: Least squares in high-dimensional linear models},
  author    = {Wang, Xiangyu and Dunson, David and Leng, Chenlei},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1814--1822},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/wange16.pdf},
  url       = {https://proceedings.mlr.press/v48/wange16.html}
}
