Guaranteed Sparse Recovery under Linear Transformation

Ji Liu, Lei Yuan, Jieping Ye
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):91-99, 2013.

Abstract

We consider the following signal recovery problem: given a measurement matrix $\Phi \in \mathbb{R}^{n \times p}$ and a noisy observation vector $c \in \mathbb{R}^n$ constructed from $c = \Phi\theta^* + \varepsilon$, where $\varepsilon \in \mathbb{R}^n$ is a noise vector whose entries follow an i.i.d. centered sub-Gaussian distribution, how can we recover the signal $\theta^*$ if $D\theta^*$ is sparse under a linear transformation $D \in \mathbb{R}^{m \times p}$? One natural method using convex optimization is to solve the following problem: $\min_\theta \frac{1}{2}\|\Phi\theta - c\|^2 + \lambda\|D\theta\|_1$. This paper provides an upper bound on the estimation error and establishes the consistency of this method under the assumption that the design matrix $\Phi$ is a Gaussian random matrix. Specifically, we show that 1) in the noiseless case, if the condition number of $D$ is bounded and the number of measurements satisfies $n \geq \Omega(s\log(p))$, where $s$ is the sparsity level, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of $D$ is bounded and the number of measurements grows faster than $s\log(p)$, that is, $s\log(p) = o(n)$, then the estimation error converges to zero with probability 1 as $p$ and $s$ go to infinity. Our results are consistent with those for the special case $D = \mathbf{I}_{p \times p}$ (equivalently, LASSO) and improve upon the existing analysis. The condition number of $D$ plays a critical role in our analysis. We consider the condition number in two cases, the fused LASSO and the random graph: in the fused LASSO case the condition number is bounded by a constant, while in the random graph case it is bounded with high probability if $m/p$ (i.e., $\#\text{edges}/\#\text{vertices}$) is larger than a certain constant. Numerical simulations are consistent with our theoretical results.
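The estimator analyzed above is the generalized LASSO. As a concrete illustration, the following minimal sketch (not the authors' implementation; the paper does not prescribe a solver) builds the fused-LASSO instance discussed in the abstract, where $D$ is the first-order difference operator, and solves it with CVXPY. The problem sizes, noise level, and regularization parameter are illustrative assumptions, not values from the paper.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p = 80, 100                           # illustrative sizes, not from the paper

# Gaussian design matrix, as assumed in the paper's analysis.
Phi = rng.standard_normal((n, p)) / np.sqrt(n)

# Fused-LASSO transformation D: first-order differences, so D @ theta_star
# is sparse whenever theta_star is piecewise constant.
D = np.diff(np.eye(p), axis=0)           # (p-1) x p difference operator

# Piecewise-constant ground truth => D @ theta_star has few nonzeros.
theta_star = np.repeat([0.0, 2.0, -1.0, 0.5], p // 4)

eps = 0.05 * rng.standard_normal(n)      # sub-Gaussian (here Gaussian) noise
c = Phi @ theta_star + eps

# Generalized LASSO: min_theta 0.5*||Phi theta - c||^2 + lambda*||D theta||_1
lam = 0.1                                # illustrative regularization level
theta = cp.Variable(p)
objective = cp.Minimize(0.5 * cp.sum_squares(Phi @ theta - c)
                        + lam * cp.norm1(D @ theta))
cp.Problem(objective).solve()

print("relative estimate error:",
      np.linalg.norm(theta.value - theta_star) / np.linalg.norm(theta_star))

Under the abstract's noisy-case scaling, rerunning this sketch with larger $n$ (for fixed $s$ and $p$) should drive the printed relative error toward zero.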

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-liu13,
  title = {Guaranteed Sparse Recovery under Linear Transformation},
  author = {Liu, Ji and Yuan, Lei and Ye, Jieping},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages = {91--99},
  year = {2013},
  editor = {Dasgupta, Sanjoy and McAllester, David},
  volume = {28},
  number = {3},
  series = {Proceedings of Machine Learning Research},
  address = {Atlanta, Georgia, USA},
  month = {17--19 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v28/liu13.pdf},
  url = {https://proceedings.mlr.press/v28/liu13.html},
  abstract = {We consider the following signal recovery problem: given a measurement matrix $\Phi \in \mathbb{R}^{n \times p}$ and a noisy observation vector $c \in \mathbb{R}^n$ constructed from $c = \Phi\theta^* + \varepsilon$, where $\varepsilon \in \mathbb{R}^n$ is a noise vector whose entries follow an i.i.d. centered sub-Gaussian distribution, how can we recover the signal $\theta^*$ if $D\theta^*$ is sparse under a linear transformation $D \in \mathbb{R}^{m \times p}$? One natural method using convex optimization is to solve the following problem: $\min_\theta \frac{1}{2}\|\Phi\theta - c\|^2 + \lambda\|D\theta\|_1$. This paper provides an upper bound on the estimation error and establishes the consistency of this method under the assumption that the design matrix $\Phi$ is a Gaussian random matrix. Specifically, we show that 1) in the noiseless case, if the condition number of $D$ is bounded and the number of measurements satisfies $n \geq \Omega(s\log(p))$, where $s$ is the sparsity level, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of $D$ is bounded and the number of measurements grows faster than $s\log(p)$, that is, $s\log(p) = o(n)$, then the estimation error converges to zero with probability 1 as $p$ and $s$ go to infinity. Our results are consistent with those for the special case $D = \mathbf{I}_{p \times p}$ (equivalently, LASSO) and improve upon the existing analysis. The condition number of $D$ plays a critical role in our analysis. We consider the condition number in two cases, the fused LASSO and the random graph: in the fused LASSO case the condition number is bounded by a constant, while in the random graph case it is bounded with high probability if $m/p$ (i.e., $\#\text{edges}/\#\text{vertices}$) is larger than a certain constant. Numerical simulations are consistent with our theoretical results.}
}
Endnote
%0 Conference Paper
%T Guaranteed Sparse Recovery under Linear Transformation
%A Ji Liu
%A Lei Yuan
%A Jieping Ye
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-liu13
%I PMLR
%P 91--99
%U https://proceedings.mlr.press/v28/liu13.html
%V 28
%N 3
%X We consider the following signal recovery problem: given a measurement matrix $\Phi \in \mathbb{R}^{n \times p}$ and a noisy observation vector $c \in \mathbb{R}^n$ constructed from $c = \Phi\theta^* + \varepsilon$, where $\varepsilon \in \mathbb{R}^n$ is a noise vector whose entries follow an i.i.d. centered sub-Gaussian distribution, how can we recover the signal $\theta^*$ if $D\theta^*$ is sparse under a linear transformation $D \in \mathbb{R}^{m \times p}$? One natural method using convex optimization is to solve the following problem: $\min_\theta \frac{1}{2}\|\Phi\theta - c\|^2 + \lambda\|D\theta\|_1$. This paper provides an upper bound on the estimation error and establishes the consistency of this method under the assumption that the design matrix $\Phi$ is a Gaussian random matrix. Specifically, we show that 1) in the noiseless case, if the condition number of $D$ is bounded and the number of measurements satisfies $n \geq \Omega(s\log(p))$, where $s$ is the sparsity level, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of $D$ is bounded and the number of measurements grows faster than $s\log(p)$, that is, $s\log(p) = o(n)$, then the estimation error converges to zero with probability 1 as $p$ and $s$ go to infinity. Our results are consistent with those for the special case $D = \mathbf{I}_{p \times p}$ (equivalently, LASSO) and improve upon the existing analysis. The condition number of $D$ plays a critical role in our analysis. We consider the condition number in two cases, the fused LASSO and the random graph: in the fused LASSO case the condition number is bounded by a constant, while in the random graph case it is bounded with high probability if $m/p$ (i.e., $\#\text{edges}/\#\text{vertices}$) is larger than a certain constant. Numerical simulations are consistent with our theoretical results.
RIS
TY - CPAPER
TI - Guaranteed Sparse Recovery under Linear Transformation
AU - Ji Liu
AU - Lei Yuan
AU - Jieping Ye
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/26
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-liu13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 3
SP - 91
EP - 99
L1 - http://proceedings.mlr.press/v28/liu13.pdf
UR - https://proceedings.mlr.press/v28/liu13.html
AB - We consider the following signal recovery problem: given a measurement matrix $\Phi \in \mathbb{R}^{n \times p}$ and a noisy observation vector $c \in \mathbb{R}^n$ constructed from $c = \Phi\theta^* + \varepsilon$, where $\varepsilon \in \mathbb{R}^n$ is a noise vector whose entries follow an i.i.d. centered sub-Gaussian distribution, how can we recover the signal $\theta^*$ if $D\theta^*$ is sparse under a linear transformation $D \in \mathbb{R}^{m \times p}$? One natural method using convex optimization is to solve the following problem: $\min_\theta \frac{1}{2}\|\Phi\theta - c\|^2 + \lambda\|D\theta\|_1$. This paper provides an upper bound on the estimation error and establishes the consistency of this method under the assumption that the design matrix $\Phi$ is a Gaussian random matrix. Specifically, we show that 1) in the noiseless case, if the condition number of $D$ is bounded and the number of measurements satisfies $n \geq \Omega(s\log(p))$, where $s$ is the sparsity level, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of $D$ is bounded and the number of measurements grows faster than $s\log(p)$, that is, $s\log(p) = o(n)$, then the estimation error converges to zero with probability 1 as $p$ and $s$ go to infinity. Our results are consistent with those for the special case $D = \mathbf{I}_{p \times p}$ (equivalently, LASSO) and improve upon the existing analysis. The condition number of $D$ plays a critical role in our analysis. We consider the condition number in two cases, the fused LASSO and the random graph: in the fused LASSO case the condition number is bounded by a constant, while in the random graph case it is bounded with high probability if $m/p$ (i.e., $\#\text{edges}/\#\text{vertices}$) is larger than a certain constant. Numerical simulations are consistent with our theoretical results.
ER -
APA
Liu, J., Yuan, L. & Ye, J. (2013). Guaranteed Sparse Recovery under Linear Transformation. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):91-99. Available from https://proceedings.mlr.press/v28/liu13.html.
