Learning and 1-bit Compressed Sensing under Asymmetric Noise
29th Annual Conference on Learning Theory, PMLR 49:152-192, 2016.
Abstract
We study the \emph{approximate recovery} problem: given corrupted 1-bit measurements of the form $\mathrm{sign}(w^* \cdot x_i)$, recover a vector $w$ that is a good approximation to $w^* \in \mathbb{R}^d$. This problem has been studied by both the learning theory and signal processing communities. In learning theory, it is known as the problem of \emph{learning halfspaces with noise}; in signal processing, as \emph{1-bit compressed sensing}, where there is an additional assumption that $w^*$ is $t$-sparse. The challenge in both cases is to design computationally efficient algorithms that tolerate large amounts of noise under realistic noise models. Furthermore, in the case of 1-bit compressed sensing, we require the number of measurements $x_i$ to scale polynomially in $t$ and only polylogarithmically in $d$, the ambient dimension. In this work, we introduce algorithms with nearly optimal guarantees for both problems under two realistic noise models, \emph{bounded (Massart) noise} and \emph{adversarial (agnostic) noise}, when the measurements $x_i$ are drawn from any isotropic log-concave distribution. Under bounded (Massart) noise, an adversary can flip the measurement of each point $x$ with probability $\eta(x) \le \eta < 1/2$. For this problem, we present an efficient algorithm that returns $w$ such that $\|w - w^*\|_2 \le \epsilon$ in time $\mathrm{poly}(d, \frac{1}{\epsilon})$ for \emph{any} constant $\eta < 1/2$. This improves significantly over the best known result in this space, due to Awasthi et al. (2015), which required the noise rate to be as small as $\eta \approx 10^{-6}$. We then introduce an attribute-efficient variant of this algorithm for 1-bit compressed sensing that achieves the same guarantee with $\mathrm{poly}(t, \log(d), \frac{1}{\epsilon})$ measurements when $\|w^*\|_0 \le t$. Under adversarial (agnostic) noise, where any $\nu$ fraction of measurements can be corrupted, we provide an algorithm that returns $w$ such that $\|w - w^*\|_2 \le O(\nu) + \epsilon$, with $\tilde{\Omega}(\frac{t}{\epsilon^3}\,\mathrm{polylog}(d))$ measurements. Our results improve on the best known approximation results in this space and, in some regimes, improve on the sample complexity of existing results. Furthermore, this is the first result of its kind in 1-bit compressed sensing that goes beyond the Gaussian marginal distribution and works for any isotropic log-concave distribution.
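To make the measurement model concrete, the Python sketch below simulates 1-bit compressed sensing under bounded (Massart) noise: it draws isotropic Gaussian measurements (one instance of an isotropic log-concave distribution), generates labels $\mathrm{sign}(w^* \cdot x_i)$, flips each with probability $\eta$, and then estimates $w^*$ with a simple averaging-plus-thresholding baseline. This baseline is an illustrative assumption of ours, not the paper's algorithm; all variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t, n, eta = 1000, 10, 5000, 0.3  # ambient dim, sparsity, measurements, noise bound

# t-sparse unit target w*
w_star = np.zeros(d)
w_star[rng.choice(d, t, replace=False)] = rng.standard_normal(t)
w_star /= np.linalg.norm(w_star)

# Isotropic Gaussian measurements x_i and clean 1-bit labels sign(w* . x_i)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)

# Bounded (Massart) noise: flip each label independently with probability
# eta(x) <= eta < 1/2; here eta(x) is constant for simplicity.
flips = rng.random(n) < eta
y[flips] *= -1

# Naive averaging baseline (NOT the paper's algorithm): for Gaussian x and a
# constant flip rate, E[y x] is proportional to w*, so the empirical average
# points roughly in the direction of w*.
w_hat = (y[:, None] * X).mean(axis=0)

# Exploit sparsity: keep the t largest coordinates, then renormalize.
keep = np.argsort(np.abs(w_hat))[-t:]
w_sparse = np.zeros(d)
w_sparse[keep] = w_hat[keep]
w_sparse /= np.linalg.norm(w_sparse)

print("recovery error ||w - w*||_2 =", np.linalg.norm(w_sparse - w_star))
```

Even this crude baseline recovers the direction of $w^*$ reasonably well at constant $\eta$, which conveys why approximate recovery is plausible; the paper's contribution is efficient algorithms with near-optimal error and measurement guarantees, including for point-dependent $\eta(x)$ and general isotropic log-concave distributions, where such simple averaging no longer suffices.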