High Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition

David Gamarnik, Ilias Zadik
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:948-953, 2017.

Abstract

We consider a sparse linear regression model $Y=X\beta^*+W$ where $X$ is an $n\times p$ matrix with i.i.d. Gaussian entries, $W$ is an $n\times 1$ noise vector with i.i.d. mean-zero Gaussian entries of standard deviation $\sigma$, and $\beta^*$ is a $p\times 1$ binary vector with support size (sparsity) $k$. Using a novel conditional second moment method we obtain an approximation, tight up to a multiplicative constant, of the optimal squared error $\min_\beta\|Y-X\beta\|_2$, where the minimization is over all $k$-sparse binary vectors $\beta$. The approximation reveals interesting structural properties of the underlying regression problem. In particular,

(a) We establish that $n^*=2k\log p/\log(2k/\sigma^2+1)$ is a phase transition point with the following "all-or-nothing" property. When $n$ exceeds $n^*$, $(2k)^{-1}\|\beta_2-\beta^*\|_0\approx 0$, and when $n$ is below $n^*$, $(2k)^{-1}\|\beta_2-\beta^*\|_0\approx 1$, where $\beta_2$ is the optimal solution achieving the smallest squared error. As a corollary, $n^*$ is the asymptotic threshold for recovering $\beta^*$ information-theoretically. Note that $n^*$ is asymptotically below the threshold $n_{\text{LASSO/CS}}=(2k+\sigma^2)\log p$, above which the LASSO and Compressive Sensing methods are able to recover $\beta^*$.

(b) We compute the squared error for an intermediate problem $\min_\beta\|Y-X\beta\|_2$ where the minimization is restricted to vectors $\beta$ with $\|\beta-\beta^*\|_0=2k\zeta$, for some fixed ratio $\zeta\in[0,1]$. We show that a lower bound part $\Gamma(\zeta)$ of the estimate, which essentially corresponds to the estimate based on the first moment method, undergoes a phase transition at three different thresholds: first at $n_{\text{inf},1}=\sigma^2\log p$, which is the information-theoretic bound for recovering $\beta^*$ when $k=1$ and $\sigma$ is large, then at $n^*$, and finally at $n_{\text{LASSO/CS}}$.

(c) We establish a certain Overlap Gap Property (OGP) on the space of all $k$-sparse binary vectors $\beta$ when $n\le ck\log p$ for a sufficiently small constant $c$. By drawing a connection with a similar OGP exhibited by many randomly generated constraint satisfaction problems and statistical physics models, we conjecture that OGP is the source of algorithmic hardness of solving the minimization problem $\min_\beta\|Y-X\beta\|_2$ in the regime $n<n_{\text{LASSO/CS}}$.
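To illustrate the "all-or-nothing" phenomenon in (a), the following minimal Python sketch (not from the paper; the function name, parameter values $p=30$, $k=3$, $\sigma=1$, and trial counts are illustrative assumptions) brute-forces the $k$-sparse binary least-squares estimator on toy instances and tracks the normalized Hamming error $(2k)^{-1}\|\beta_2-\beta^*\|_0$ as $n$ crosses $n^*$. At this small scale the asymptotic thresholds are only indicative.

import itertools
import numpy as np

def brute_force_overlap(n, p, k, sigma, rng):
    # Generate one instance Y = X beta* + W, with beta* supported on the
    # first k coordinates (without loss of generality, by symmetry of the
    # Gaussian design).
    X = rng.standard_normal((n, p))
    W = sigma * rng.standard_normal(n)
    Y = X[:, :k].sum(axis=1) + W
    support_star = set(range(k))
    # Exhaustive minimization of ||Y - X beta||_2 over k-sparse binary beta;
    # exponential in p, so only feasible at toy scale.
    best_err, best_support = np.inf, None
    for support in itertools.combinations(range(p), k):
        err = np.linalg.norm(Y - X[:, list(support)].sum(axis=1))
        if err < best_err:
            best_err, best_support = err, set(support)
    # For two k-sparse binary vectors sharing m support coordinates,
    # ||beta_2 - beta*||_0 = 2(k - m), so the normalized error is (k - m)/k.
    return (k - len(best_support & support_star)) / k

rng = np.random.default_rng(0)
p, k, sigma = 30, 3, 1.0  # illustrative toy parameters (assumption)
n_star = 2 * k * np.log(p) / np.log(2 * k / sigma**2 + 1)
n_lasso_cs = (2 * k + sigma**2) * np.log(p)
print(f"n* = {n_star:.1f}, n_LASSO/CS = {n_lasso_cs:.1f}")
for n in (5, 10, 20, 40):
    errs = [brute_force_overlap(n, p, k, sigma, rng) for _ in range(20)]
    print(f"n = {n:3d}: mean normalized Hamming error {np.mean(errs):.2f}")

Exhaustive search is used deliberately: the paper's point (c) suggests the optimization itself may be algorithmically hard below $n_{\text{LASSO/CS}}$, so a polynomial-time surrogate such as LASSO would conflate the statistical and computational thresholds.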

Cite this Paper


BibTeX
@InProceedings{pmlr-v65-david17a,
  title     = {High Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition},
  author    = {Gamarnik, David and Zadik, Ilias},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {948--953},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/david17a/david17a.pdf},
  url       = {https://proceedings.mlr.press/v65/david17a.html}
}
Endnote
%0 Conference Paper
%T High Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition
%A David Gamarnik
%A Ilias Zadik
%B Proceedings of the 2017 Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Satyen Kale
%E Ohad Shamir
%F pmlr-v65-david17a
%I PMLR
%P 948--953
%U https://proceedings.mlr.press/v65/david17a.html
%V 65
APA
Gamarnik, D. & Zadik, I. (2017). High Dimensional Regression with Binary Coefficients. Estimating Squared Error and a Phase Transition. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:948-953. Available from https://proceedings.mlr.press/v65/david17a.html.
