Learning and 1-bit Compressed Sensing under Asymmetric Noise

Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab, Hongyang Zhang
29th Annual Conference on Learning Theory, PMLR 49:152-192, 2016.

Abstract

We study the \emph{approximate recovery} problem: given corrupted 1-bit measurements of the form $\mathrm{sign}(w^* \cdot x_i)$, recover a vector $w$ that is a good approximation to $w^* \in \mathbb{R}^d$. This problem has been studied by both the learning theory and signal processing communities. In learning theory, it is known as the problem of \emph{learning halfspaces with noise}; in signal processing, as \emph{1-bit compressed sensing}, where there is an additional assumption that $w^*$ is $t$-sparse. The challenge in both cases is to design computationally efficient algorithms that are tolerant to large amounts of noise under realistic noise models. Furthermore, in the case of 1-bit compressed sensing, we require the number of measurements $x_i$ to scale polynomially in $t$ and only polylogarithmically in $d$, the ambient dimension. In this work, we introduce algorithms with nearly optimal guarantees for both problems under two realistic noise models, \emph{bounded (Massart) noise} and \emph{adversarial (agnostic) noise}, when the measurements $x_i$ are drawn from any isotropic log-concave distribution. Under bounded (Massart) noise, an adversary can flip the measurement of each point $x$ with probability $\eta(x) \leq \eta < 1/2$. For this problem, we present an efficient algorithm that returns $w$ such that $\|w - w^*\|_2 \leq \epsilon$ in time $\mathrm{poly}(d, \frac{1}{\epsilon})$ for \emph{any} constant $\eta < 1/2$. This improves significantly over the best known result of Awasthi et al. (2015) in this space, which required the noise to be as small as $\eta \approx 10^{-6}$. We then introduce an attribute-efficient variant of this algorithm for 1-bit compressed sensing that achieves the same guarantee with $\mathrm{poly}(t, \log(d), \frac{1}{\epsilon})$ measurements when $\|w^*\|_0 \leq t$. For adversarial (agnostic) noise, where any $\nu$ fraction of measurements can be corrupted, we provide an algorithm that returns $w$ such that $\|w - w^*\|_2 \leq O(\nu) + \epsilon$, with $\tilde{\Omega}(\frac{t}{\epsilon^3} \mathrm{polylog}(d))$ measurements. Our results improve on the best known approximation results in this space, and in some regimes improve on the sample complexity of existing results. Furthermore, this is the first result of its kind in 1-bit compressed sensing that goes beyond the Gaussian marginal distribution and works for any isotropic log-concave distribution.
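
To make the measurement model concrete, here is a minimal sketch (not the paper's algorithm) of how corrupted 1-bit measurements arise under bounded (Massart) noise. The standard Gaussian stands in as one example of an isotropic log-concave distribution, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of the measurement model: y_i = sign(w* . x_i), with each label
# flipped with probability eta(x_i) <= eta < 1/2 (bounded/Massart noise).

rng = np.random.default_rng(0)

d, t, n = 1000, 10, 5000   # ambient dimension, sparsity, number of measurements
eta = 0.4                  # noise bound; the paper handles any constant eta < 1/2

# t-sparse unit-norm target w*
w_star = np.zeros(d)
support = rng.choice(d, size=t, replace=False)
w_star[support] = rng.normal(size=t)
w_star /= np.linalg.norm(w_star)

# Measurements drawn from an isotropic log-concave distribution
# (standard Gaussian used here for illustration)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)            # clean 1-bit measurements

# Massart adversary: flip each label with some probability eta(x) <= eta;
# this sketch simply uses the worst-case constant rate eta.
flips = rng.random(n) < eta
y[flips] *= -1

# Goal of approximate recovery: from (X, y), find w with ||w - w*||_2 <= epsilon,
# using poly(t, log d, 1/epsilon) measurements in the sparse setting.
```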

Cite this Paper


BibTeX
@InProceedings{pmlr-v49-awasthi16,
  title     = {Learning and 1-bit Compressed Sensing under Asymmetric Noise},
  author    = {Awasthi, Pranjal and Balcan, Maria-Florina and Haghtalab, Nika and Zhang, Hongyang},
  booktitle = {29th Annual Conference on Learning Theory},
  pages     = {152--192},
  year      = {2016},
  editor    = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
  volume    = {49},
  series    = {Proceedings of Machine Learning Research},
  address   = {Columbia University, New York, New York, USA},
  month     = {23--26 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v49/awasthi16.pdf},
  url       = {https://proceedings.mlr.press/v49/awasthi16.html}
}
Endnote
%0 Conference Paper
%T Learning and 1-bit Compressed Sensing under Asymmetric Noise
%A Pranjal Awasthi
%A Maria-Florina Balcan
%A Nika Haghtalab
%A Hongyang Zhang
%B 29th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2016
%E Vitaly Feldman
%E Alexander Rakhlin
%E Ohad Shamir
%F pmlr-v49-awasthi16
%I PMLR
%P 152--192
%U https://proceedings.mlr.press/v49/awasthi16.html
%V 49
RIS
TY  - CPAPER
TI  - Learning and 1-bit Compressed Sensing under Asymmetric Noise
AU  - Pranjal Awasthi
AU  - Maria-Florina Balcan
AU  - Nika Haghtalab
AU  - Hongyang Zhang
BT  - 29th Annual Conference on Learning Theory
DA  - 2016/06/06
ED  - Vitaly Feldman
ED  - Alexander Rakhlin
ED  - Ohad Shamir
ID  - pmlr-v49-awasthi16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 49
SP  - 152
EP  - 192
L1  - http://proceedings.mlr.press/v49/awasthi16.pdf
UR  - https://proceedings.mlr.press/v49/awasthi16.html
ER  -
APA
Awasthi, P., Balcan, M.-F., Haghtalab, N. & Zhang, H. (2016). Learning and 1-bit Compressed Sensing under Asymmetric Noise. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:152-192. Available from https://proceedings.mlr.press/v49/awasthi16.html.