Elementary Estimators for High-Dimensional Linear Regression

Eunho Yang, Aurelie Lozano, Pradeep Ravikumar
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):388–396, 2014.

Abstract

We consider the problem of structurally constrained high-dimensional linear regression. This problem has attracted considerable attention over the last decade, with state-of-the-art statistical estimators based on solving regularized convex programs. While these typically non-smooth convex programs can be solved in polynomial time, scaling state-of-the-art optimization methods to very large-scale problems is an ongoing and rich area of research. In this paper, we attempt to address this scaling issue at the source, by asking whether one can build simpler, possibly closed-form estimators that nevertheless come with statistical guarantees comparable to those of regularized likelihood estimators. We answer this question in the affirmative, with variants of the classical ridge and OLS (ordinary least squares) estimators for linear regression. We analyze our estimators in the high-dimensional setting, and provide empirical corroboration of their performance on simulated as well as real-world microarray data.
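The abstract's central idea, closed-form estimators built from ridge/OLS-style ingredients, can be illustrated with a minimal sketch. The code below is an assumed stand-in, not the paper's exact estimator: it computes a ridge-regularized backbone estimate (well-defined even when p > n) and then soft-thresholds it entrywise to obtain a sparse estimate, all in closed form with no iterative optimization.

```python
import numpy as np

def elementary_ridge_estimator(X, y, eps=1.0, lam=0.1):
    """Illustrative closed-form pipeline: ridge-type backbone estimate
    followed by entrywise soft-thresholding. Hypothetical sketch of the
    kind of estimator the abstract describes, not the paper's exact one."""
    n, p = X.shape
    # Ridge-regularized backbone; eps * I keeps the system invertible when p > n
    theta_ridge = np.linalg.solve(X.T @ X / n + eps * np.eye(p), X.T @ y / n)
    # Soft-thresholding enforces sparsity, still in closed form
    return np.sign(theta_ridge) * np.maximum(np.abs(theta_ridge) - lam, 0.0)

# Toy usage on simulated sparse high-dimensional data (n < p)
rng = np.random.default_rng(0)
n, p = 100, 200
theta_star = np.zeros(p)
theta_star[:5] = 1.0
X = rng.standard_normal((n, p))
y = X @ theta_star + 0.1 * rng.standard_normal(n)
theta_hat = elementary_ridge_estimator(X, y)
```

Because both steps are closed-form linear-algebra operations, the whole estimator costs one p x p solve, which is the scaling advantage over iteratively solved regularized programs that motivates the paper.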

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-yangc14,
  title     = {Elementary Estimators for High-Dimensional Linear Regression},
  author    = {Eunho Yang and Aurelie Lozano and Pradeep Ravikumar},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {388--396},
  year      = {2014},
  editor    = {Eric P. Xing and Tony Jebara},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/yangc14.pdf},
  url       = {http://proceedings.mlr.press/v32/yangc14.html},
  abstract  = {We consider the problem of structurally constrained high-dimensional linear regression. This problem has attracted considerable attention over the last decade, with state-of-the-art statistical estimators based on solving regularized convex programs. While these typically non-smooth convex programs can be solved in polynomial time, scaling state-of-the-art optimization methods to very large-scale problems is an ongoing and rich area of research. In this paper, we attempt to address this scaling issue at the source, by asking whether one can build \emph{simpler}, possibly closed-form estimators that nevertheless come with statistical guarantees comparable to those of regularized likelihood estimators. We answer this question in the affirmative, with variants of the classical ridge and OLS (ordinary least squares) estimators for linear regression. We analyze our estimators in the high-dimensional setting, and provide empirical corroboration of their performance on simulated as well as real-world microarray data.}
}
Endnote
%0 Conference Paper %T Elementary Estimators for High-Dimensional Linear Regression %A Eunho Yang %A Aurelie Lozano %A Pradeep Ravikumar %B Proceedings of the 31st International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2014 %E Eric P. Xing %E Tony Jebara %F pmlr-v32-yangc14 %I PMLR %J Proceedings of Machine Learning Research %P 388--396 %U http://proceedings.mlr.press %V 32 %N 2 %W PMLR %X We consider the problem of structurally constrained high-dimensional linear regression. This problem has attracted considerable attention over the last decade, with state-of-the-art statistical estimators based on solving regularized convex programs. While these typically non-smooth convex programs can be solved in polynomial time, scaling state-of-the-art optimization methods to very large-scale problems is an ongoing and rich area of research. In this paper, we attempt to address this scaling issue at the source, by asking whether one can build simpler, possibly closed-form estimators that nevertheless come with statistical guarantees comparable to those of regularized likelihood estimators. We answer this question in the affirmative, with variants of the classical ridge and OLS (ordinary least squares) estimators for linear regression. We analyze our estimators in the high-dimensional setting, and provide empirical corroboration of their performance on simulated as well as real-world microarray data.
RIS
TY - CPAPER TI - Elementary Estimators for High-Dimensional Linear Regression AU - Eunho Yang AU - Aurelie Lozano AU - Pradeep Ravikumar BT - Proceedings of the 31st International Conference on Machine Learning PY - 2014/01/27 DA - 2014/01/27 ED - Eric P. Xing ED - Tony Jebara ID - pmlr-v32-yangc14 PB - PMLR SP - 388 DP - PMLR EP - 396 L1 - http://proceedings.mlr.press/v32/yangc14.pdf UR - http://proceedings.mlr.press/v32/yangc14.html AB - We consider the problem of structurally constrained high-dimensional linear regression. This problem has attracted considerable attention over the last decade, with state-of-the-art statistical estimators based on solving regularized convex programs. While these typically non-smooth convex programs can be solved in polynomial time, scaling state-of-the-art optimization methods to very large-scale problems is an ongoing and rich area of research. In this paper, we attempt to address this scaling issue at the source, by asking whether one can build simpler, possibly closed-form estimators that nevertheless come with statistical guarantees comparable to those of regularized likelihood estimators. We answer this question in the affirmative, with variants of the classical ridge and OLS (ordinary least squares) estimators for linear regression. We analyze our estimators in the high-dimensional setting, and provide empirical corroboration of their performance on simulated as well as real-world microarray data. ER -
APA
Yang, E., Lozano, A. &amp; Ravikumar, P. (2014). Elementary Estimators for High-Dimensional Linear Regression. Proceedings of the 31st International Conference on Machine Learning, in PMLR 32(2):388–396.
