Towards a Lower Sample Complexity for Robust One-bit Compressed Sensing
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:739-747, 2015.
Abstract
In this paper, we propose a novel algorithm based on a nonconvex sparsity-inducing penalty for one-bit compressed sensing. We prove that our algorithm has a sample complexity of O(s/ε^2) for strong signals, and O(s log d/ε^2) for weak signals, where s is the number of nonzero entries in the signal vector, d is the signal dimension and ε is the recovery error. For general signals, the sample complexity of our algorithm lies between O(s/ε^2) and O(s log d/ε^2). This is a remarkable improvement over the existing best sample complexity O(s log d/ε^2). Furthermore, we show that our algorithm achieves exact support recovery with high probability for strong signals. Our theory is verified by extensive numerical experiments, which clearly illustrate the superiority of our algorithm for both approximate signal and support recovery in the noisy setting.
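The sketch below is only meant to make the problem setting concrete: it generates the standard one-bit compressed sensing measurement model (an s-sparse, d-dimensional unit-norm signal observed through the signs of Gaussian measurements, with a small fraction of sign flips as noise) and recovers the signal with a simple back-projection-plus-hard-thresholding baseline. It is not the paper's nonconvex-penalty algorithm, and the dimensions, sparsity level, and 5% flip rate are illustrative assumptions.

```python
# Minimal sketch of the one-bit compressed sensing setting (not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
d, s, n = 1000, 10, 500            # dimension, sparsity, number of measurements (assumed values)

# s-sparse, unit-norm ground-truth signal x
x = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

# One-bit measurements: signs of Gaussian projections, with 5% random sign flips as noise
A = rng.standard_normal((n, d))
flips = rng.random(n) < 0.05
y = np.sign(A @ x) * np.where(flips, -1.0, 1.0)

# Baseline recovery: back-project the signs, keep the s largest entries, renormalize
z = A.T @ y / n
x_hat = np.zeros(d)
top = np.argsort(np.abs(z))[-s:]
x_hat[top] = z[top]
x_hat /= np.linalg.norm(x_hat)

print("recovery error:", np.linalg.norm(x_hat - x))
print("exact support recovery:", set(top) == set(support))
```

Under this model, the quantities in the abstract have direct counterparts: the recovery error corresponds to ‖x_hat − x‖, and exact support recovery corresponds to the selected index set matching the true support.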