Algorithms for Sparse LPN and LSPN Against Low-noise (extended abstract)
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:1091-1093, 2025.
Abstract
We consider sparse variants of the classical Learning Parities with random Noise (LPN) problem. Our main contribution is a new algorithmic framework that provides learning algorithms against low noise for both the Learning Sparse Parities with Noise (LSPN) problem and the sparse LPN problem. Unlike previous approaches to LSPN and sparse LPN, this framework has a simple structure without fast matrix multiplication or tensor methods, so its algorithms are easy to implement and run in polynomial space. Let $n$ be the dimension, $k$ denote the sparsity, and $\eta$ be the noise rate, so that each label gets flipped with probability $\eta$.

As a fundamental problem in computational learning theory, Learning Sparse Parities with Noise (LSPN) assumes the hidden parity is $k$-sparse instead of a potentially dense vector. While the simple enumeration algorithm takes ${n \choose k}=O(n/k)^k$ time, previously known results still need at least ${n \choose k/2} = \Omega(n/k)^{k/2}$ time for any noise rate $\eta$. Our framework provides an LSPN algorithm that runs in time $O(\eta \cdot n/k)^k$ for any noise rate $\eta$, which improves the state of the art for LSPN whenever $\eta \in (k/n,\sqrt{k/n})$.

The sparse LPN problem is closely related to the classical problem of refuting random $k$-CSPs and has been widely used in cryptography as a hardness assumption. Unlike standard LPN, which samples random vectors in $\mathbf{F}_2^n$, it samples random $k$-sparse vectors. Because the number of $k$-sparse vectors is only ${n \choose k}$, sparse LPN admits polynomial-time learning algorithms once the number of samples $m$ is on the order of $n^{k/2}$. However, much less is known about learning algorithms for a constant $k$ like 3 and $m
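To make the baseline concrete, here is a minimal sketch (not from the paper) of the LSPN setup and the simple ${n \choose k}$-time enumeration algorithm the abstract refers to: try every $k$-sparse parity and keep the one with the fewest disagreements on the noisy labels. The function names and parameter choices ($n=20$, $k=3$, $\eta=0.05$, $m=400$) are illustrative assumptions; this is the trivial baseline, not the paper's $O(\eta \cdot n/k)^k$-time algorithm.

```python
# Illustrative sketch of LSPN and the brute-force enumeration baseline.
# All names and parameters here are assumptions for demonstration only.
from itertools import combinations
import random

def sample_lspn(n, k, eta, m, rng):
    """Draw m samples (x, y) with y = <x, s> XOR noise, for a hidden k-sparse s."""
    support = set(rng.sample(range(n), k))          # hidden k-sparse parity
    samples = []
    for _ in range(m):
        x = [rng.randint(0, 1) for _ in range(n)]   # uniform x in F_2^n
        y = sum(x[i] for i in support) % 2          # noiseless label <x, s>
        if rng.random() < eta:                      # flip label with probability eta
            y ^= 1
        samples.append((x, y))
    return support, samples

def enumerate_lspn(samples, n, k):
    """Brute force over all (n choose k) supports; return the best-fitting one."""
    best, best_errors = None, None
    for cand in combinations(range(n), k):
        errors = sum((sum(x[i] for i in cand) % 2) != y for x, y in samples)
        if best_errors is None or errors < best_errors:
            best, best_errors = set(cand), errors
    return best

rng = random.Random(0)
secret, data = sample_lspn(n=20, k=3, eta=0.05, m=400, rng=rng)
print("hidden support:", sorted(secret))
print("recovered     :", sorted(enumerate_lspn(data, n=20, k=3)))
```

For these small parameters the enumeration over ${20 \choose 3}=1140$ candidate supports is instantaneous, but the ${n \choose k}=O(n/k)^k$ cost makes it infeasible already for moderate $n$ and $k$, which is the gap the paper's framework targets.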