Variable Selection is Hard

Dean Foster, Howard Karloff, Justin Thaler
Proceedings of The 28th Conference on Learning Theory, PMLR 40:696-709, 2015.

Abstract

Variable selection for sparse linear regression is the problem of finding, given an $m \times p$ matrix $B$ and a target vector $\mathbf{y}$, a sparse vector $\mathbf{x}$ such that $B\mathbf{x}$ approximately equals $\mathbf{y}$. Assuming a standard complexity hypothesis, we show that no polynomial-time algorithm can find a $k'$-sparse $\mathbf{x}$ with $\|B\mathbf{x} - \mathbf{y}\|^2 \le h(m,p)$, where $k' = k \cdot 2^{\log^{1-\delta} p}$ and $h(m,p) = p^{C_1} m^{1-C_2}$, for arbitrary $\delta > 0$, $C_1 > 0$, and $C_2 > 0$. This is true even under the promise that there is an unknown $k$-sparse vector $\mathbf{x}^*$ satisfying $B\mathbf{x}^* = \mathbf{y}$. We prove a similar result for a statistical version of the problem in which the data are corrupted by noise. To the authors' knowledge, these are the first hardness results for sparse regression that apply when the algorithm simultaneously has $k' > k$ and $h(m,p) > 0$.
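
To make the problem statement concrete, below is a minimal NumPy sketch of the variable-selection task (an illustration under stated assumptions, not code from the paper; all names and the toy instance are hypothetical). It builds an instance satisfying the paper's promise, $B\mathbf{x}^* = \mathbf{y}$ for an unknown $k$-sparse $\mathbf{x}^*$, and runs greedy forward selection, a natural polynomial-time heuristic; the hardness result says that no polynomial-time algorithm, greedy or otherwise, can guarantee a $k'$-sparse solution with residual at most $h(m,p)$ in the worst case.

import numpy as np

def greedy_forward_selection(B, y, k):
    """Pick k columns of B greedily, refitting least squares at each step."""
    m, p = B.shape
    support = []
    residual = y.copy()
    for _ in range(k):
        # Choose the column most correlated with the current residual.
        scores = np.abs(B.T @ residual)
        scores[support] = -np.inf          # skip already-chosen columns
        support.append(int(np.argmax(scores)))
        # Refit least squares on the chosen support.
        coef, *_ = np.linalg.lstsq(B[:, support], y, rcond=None)
        residual = y - B[:, support] @ coef
    x = np.zeros(p)
    x[support] = coef
    return x, float(residual @ residual)   # k-sparse x and ||Bx - y||^2

# Toy instance satisfying the paper's promise: y = B x* for a k-sparse x*.
rng = np.random.default_rng(0)
m, p, k = 50, 200, 5
B = rng.standard_normal((m, p))
x_star = np.zeros(p)
x_star[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = B @ x_star
x_hat, err = greedy_forward_selection(B, y, k)
print(f"residual ||Bx - y||^2 = {err:.3e}")

On a random Gaussian instance like this one the heuristic typically succeeds; the paper's point is about worst-case instances, where no polynomial-time algorithm achieves the stated guarantee.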

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Foster15,
  title     = {Variable Selection is Hard},
  author    = {Dean Foster and Howard Karloff and Justin Thaler},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {696--709},
  year      = {2015},
  editor    = {Peter Grünwald and Elad Hazan and Satyen Kale},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Foster15.pdf},
  url       = {http://proceedings.mlr.press/v40/Foster15.html},
  abstract  = {Variable selection for sparse linear regression is the problem of finding, given an $m \times p$ matrix $B$ and a target vector $\mathbf{y}$, a sparse vector $\mathbf{x}$ such that $B\mathbf{x}$ approximately equals $\mathbf{y}$. Assuming a standard complexity hypothesis, we show that no polynomial-time algorithm can find a $k'$-sparse $\mathbf{x}$ with $\|B\mathbf{x} - \mathbf{y}\|^2 \le h(m,p)$, where $k' = k \cdot 2^{\log^{1-\delta} p}$ and $h(m,p) = p^{C_1} m^{1-C_2}$, for arbitrary $\delta > 0$, $C_1 > 0$, and $C_2 > 0$. This is true even under the promise that there is an unknown $k$-sparse vector $\mathbf{x}^*$ satisfying $B\mathbf{x}^* = \mathbf{y}$. We prove a similar result for a statistical version of the problem in which the data are corrupted by noise. To the authors' knowledge, these are the first hardness results for sparse regression that apply when the algorithm simultaneously has $k' > k$ and $h(m,p) > 0$.}
}
APA
Foster, D., Karloff, H. & Thaler, J. (2015). Variable Selection is Hard. Proceedings of The 28th Conference on Learning Theory, in PMLR 40:696-709