Variable Selection is Hard

Dean Foster, Howard Karloff, Justin Thaler
Proceedings of The 28th Conference on Learning Theory, PMLR 40:696-709, 2015.

Abstract

Variable selection for sparse linear regression is the problem of finding, given an $m \times p$ matrix $B$ and a target vector $\mathbf{y}$, a sparse vector $\mathbf{x}$ such that $B\mathbf{x}$ approximately equals $\mathbf{y}$. Assuming a standard complexity hypothesis, we show that no polynomial-time algorithm can find a $k'$-sparse $\mathbf{x}$ with $\|B\mathbf{x} - \mathbf{y}\|^2 \le h(m,p)$, where $k' = k \cdot 2^{\log^{1-\delta} p}$ and $h(m,p) = p^{C_1} m^{1-C_2}$, where $\delta > 0$, $C_1 > 0$, $C_2 > 0$ are arbitrary. This is true even under the promise that there is an unknown $k$-sparse vector $\mathbf{x}^*$ satisfying $B\mathbf{x}^* = \mathbf{y}$. We prove a similar result for a statistical version of the problem in which the data are corrupted by noise. To the authors' knowledge, these are the first hardness results for sparse regression that apply when the algorithm simultaneously has $k' > k$ and $h(m,p) > 0$.
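
To make the problem concrete, the following is a minimal illustrative sketch (ours, not from the paper) of the exhaustive approach whose cost the hardness result says cannot in general be avoided: enumerate every $k$-subset of the columns of $B$ and solve least squares on each. The function name brute_force_select and the NumPy-based setup are assumptions introduced here for illustration.

# Illustrative brute-force variable selection: try every k-subset of
# columns and solve least squares on each. Exact, but requires C(p, k)
# least-squares solves, which is exponential in k. This sketch and its
# names are ours, not the paper's.
from itertools import combinations
import numpy as np

def brute_force_select(B, y, k):
    """Return the k-sparse x minimizing ||Bx - y||^2, by exhaustive search."""
    m, p = B.shape
    best_x, best_err = None, np.inf
    for S in combinations(range(p), k):
        cols = list(S)
        coef, *_ = np.linalg.lstsq(B[:, cols], y, rcond=None)
        err = np.sum((B[:, cols] @ coef - y) ** 2)
        if err < best_err:
            best_x = np.zeros(p)
            best_x[cols] = coef
            best_err = err
    return best_x, best_err

# Example: a planted 3-sparse signal with no noise, so the promise
# B x* = y from the abstract holds and the search recovers it exactly.
rng = np.random.default_rng(0)
m, p, k = 30, 12, 3
B = rng.standard_normal((m, p))
x_star = np.zeros(p)
x_star[[1, 4, 7]] = [2.0, -1.5, 0.5]
y = B @ x_star
x_hat, err = brute_force_select(B, y, k)
print(err)  # ~0: some k-subset fits y perfectly

Note the $\binom{p}{k}$ solves: the theorem rules out polynomial-time algorithms even when they are allowed a much larger support size $k'$ and a substantial residual error $h(m,p)$.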

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Foster15,
  title     = {Variable Selection is Hard},
  author    = {Foster, Dean and Karloff, Howard and Thaler, Justin},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {696--709},
  year      = {2015},
  editor    = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Foster15.pdf},
  url       = {https://proceedings.mlr.press/v40/Foster15.html}
}
APA
Foster, D., Karloff, H. & Thaler, J. (2015). Variable Selection is Hard. Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:696-709. Available from https://proceedings.mlr.press/v40/Foster15.html.
