Efficient Algorithms for Robust One-bit Compressive Sensing

Lijun Zhang, Jinfeng Yi, Rong Jin
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):820-828, 2014.

Abstract

While conventional compressive sensing assumes measurements of infinite precision, one-bit compressive sensing considers an extreme setting where each measurement is quantized to just a single bit. In this paper, we study the vector recovery problem from noisy one-bit measurements and develop two novel algorithms with formal theoretical guarantees. First, we propose a passive algorithm, which is very efficient in the sense that it only needs to solve a convex optimization problem with a closed-form solution. Despite its apparent simplicity, our theoretical analysis reveals that the proposed algorithm can recover both exactly sparse and approximately sparse vectors. In particular, for a sparse vector with s nonzero elements, the sample complexity is O(s \log n / ε^2), where n is the dimensionality and ε is the recovery error. This result improves significantly over the previously best known sample complexity in the noisy setting, which is O(s \log n / ε^4). Second, when the noise model is known, we develop an adaptive algorithm based on the principle of active learning. The key idea is to solicit the sign information only when it cannot be inferred from the current estimator. Compared with the passive algorithm, the adaptive one has a lower sample complexity when a high-precision solution is desired.
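
The sketch below is a rough, self-contained illustration of the setting described in the abstract, not a reproduction of the paper's algorithms: it simulates noisy one-bit measurements y_i = sign(⟨a_i, x⟩) with random sign flips, recovers an s-sparse vector by correlating the measurements with the Gaussian sensing vectors and keeping the s largest-magnitude coordinates (a simplified stand-in for the paper's closed-form convex program), and sketches the adaptive rule of querying a new sign only when the current estimate cannot predict it with a confident margin. The flip probability, the margin threshold tau, and all function names are illustrative assumptions.

    import numpy as np

    def one_bit_measurements(x, m, flip_prob=0.1, rng=None):
        # Simulate m noisy one-bit measurements y_i = sign(<a_i, x>),
        # each sign flipped independently with probability flip_prob (assumed noise model).
        rng = np.random.default_rng(rng)
        A = rng.standard_normal((m, x.size))      # Gaussian sensing vectors
        y = np.sign(A @ x)
        flips = rng.random(m) < flip_prob
        return A, np.where(flips, -y, y)

    def passive_estimate(A, y, s):
        # Illustrative passive recovery: correlate measurements with the sensing
        # vectors, keep the s largest-magnitude coordinates, and normalize
        # (one-bit measurements carry no information about the scale of x).
        g = A.T @ y / len(y)                      # empirical correlation
        support = np.argsort(np.abs(g))[-s:]      # top-s coordinates
        x_hat = np.zeros_like(g)
        x_hat[support] = g[support]
        return x_hat / np.linalg.norm(x_hat)

    def needs_query(a_new, x_hat, tau=0.5):
        # Adaptive idea (sketch): solicit the one-bit sign of a new measurement
        # only when the current estimate cannot predict it with margin tau.
        return abs(a_new @ x_hat) < tau

    # Usage: recover a unit-norm 10-sparse vector in dimension 1000.
    rng = np.random.default_rng(0)
    n, s, m = 1000, 10, 5000
    x = np.zeros(n)
    x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
    x /= np.linalg.norm(x)

    A, y = one_bit_measurements(x, m, rng=rng)
    x_hat = passive_estimate(A, y, s)
    print("recovery error:", np.linalg.norm(x - x_hat))
    print("query a fresh measurement?", needs_query(rng.standard_normal(n), x_hat))

Because one-bit measurements discard all magnitude information, both the true vector and the estimate are normalized to unit norm before the recovery error is reported.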

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-zhangc14,
  title     = {Efficient Algorithms for Robust One-bit Compressive Sensing},
  author    = {Zhang, Lijun and Yi, Jinfeng and Jin, Rong},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {820--828},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/zhangc14.pdf},
  url       = {https://proceedings.mlr.press/v32/zhangc14.html}
}
APA
Zhang, L., Yi, J., & Jin, R. (2014). Efficient Algorithms for Robust One-bit Compressive Sensing. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):820-828. Available from https://proceedings.mlr.press/v32/zhangc14.html.

Related Material

Download PDF: http://proceedings.mlr.press/v32/zhangc14.pdf