Guaranteed Sparse Recovery under Linear Transformation
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):91-99, 2013.
Abstract
We consider the following signal recovery problem: given a measurement matrix $\Phi \in \mathbb{R}^{n\times p}$ and a noisy observation vector $c \in \mathbb{R}^n$ constructed from $c = \Phi\theta^* + \epsilon$, where $\epsilon \in \mathbb{R}^n$ is a noise vector whose entries follow an i.i.d. centered sub-Gaussian distribution, how can we recover the signal $\theta^*$ if $D\theta^*$ is sparse under a linear transformation $D \in \mathbb{R}^{m\times p}$? One natural method using convex optimization is to solve the following problem: $\min_\theta \frac{1}{2}\|\Phi\theta - c\|^2 + \lambda\|D\theta\|_1$. This paper provides an upper bound on the estimation error and shows the consistency of this method, assuming that the design matrix $\Phi$ is a Gaussian random matrix. Specifically, we show that 1) in the noiseless case, if the condition number of $D$ is bounded and the number of measurements satisfies $n \geq \Omega(s\log(p))$, where $s$ is the sparsity number, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of $D$ is bounded and the number of measurements grows faster than $s\log(p)$, that is, $s\log(p) = o(n)$, then the estimation error converges to zero with probability 1 as $p$ and $s$ go to infinity. Our results are consistent with those for the special case $D = \mathbf{I}_{p\times p}$ (equivalently, LASSO) and improve on the existing analysis. The condition number of $D$ plays a critical role in our analysis. We consider the condition number in two cases, the fused LASSO and the random graph: in the fused LASSO case the condition number is bounded by a constant, while in the random graph case it is bounded with high probability if $\frac{m}{p}$ (i.e., $\frac{\#\text{edge}}{\#\text{vertex}}$) is larger than a certain constant. Numerical simulations are consistent with our theoretical results.
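For concreteness, the estimator $\min_\theta \frac{1}{2}\|\Phi\theta - c\|^2 + \lambda\|D\theta\|_1$ can be prototyped directly as a convex program. The sketch below is illustrative only, not the paper's experimental setup: it assumes CVXPY is available, instantiates $D$ as the fused-LASSO first-difference matrix (one of the two cases discussed in the abstract), draws a Gaussian design $\Phi$ as the analysis assumes, and picks the dimensions, the piecewise-constant ground truth, the noise level, and $\lambda$ arbitrarily.

```python
import numpy as np
import cvxpy as cp

# Illustrative problem sizes (not taken from the paper).
n, p = 200, 500            # measurements, signal dimension
rng = np.random.default_rng(0)

# Gaussian random design matrix, as assumed in the analysis.
Phi = rng.standard_normal((n, p)) / np.sqrt(n)

# Fused-LASSO transform: (D theta)_k = theta_{k+1} - theta_k,
# so D is the (p-1) x p first-difference matrix.
D = np.eye(p - 1, p, k=1) - np.eye(p - 1, p)

# Piecewise-constant ground truth, so D @ theta_star is sparse.
theta_star = np.repeat(rng.standard_normal(5), p // 5)
eps = 0.05 * rng.standard_normal(n)      # Gaussian noise, a sub-Gaussian instance
c = Phi @ theta_star + eps

# Generalized-LASSO estimate:
#   min_theta 0.5 * ||Phi theta - c||^2 + lam * ||D theta||_1
lam = 0.1                                # illustrative choice, not tuned
theta = cp.Variable(p)
objective = cp.Minimize(0.5 * cp.sum_squares(Phi @ theta - c)
                        + lam * cp.norm1(D @ theta))
cp.Problem(objective).solve()

print("estimation error ||theta_hat - theta*||:",
      np.linalg.norm(theta.value - theta_star))
```

Under the abstract's scaling, rerunning such a simulation with $n$ growing faster than $s\log(p)$ should drive the printed estimation error toward zero; a generic convex solver is used here purely for transparency, since any method that solves the same objective would do.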