Algorithms for Sparse LPN and LSPN Against Low-noise (extended abstract)

Xue Chen, Wenxuan Shu, Zhaienhe Zhou
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:1091-1093, 2025.

Abstract

We consider sparse variants of the classical Learning Parities with random Noise (LPN) problem. Our main contribution is a new algorithmic framework that provides learning algorithms against low noise for both the Learning Sparse Parities with Noise (LSPN) problem and the sparse LPN problem. Unlike previous approaches to LSPN and sparse LPN, this framework has a simple structure without fast matrix multiplication or tensor methods, so its algorithms are easy to implement and run in polynomial space. Let $n$ be the dimension, $k$ denote the sparsity, and $\eta$ be the noise rate, so that each label gets flipped with probability $\eta$. As a fundamental problem in computational learning theory, Learning Sparse Parities with Noise (LSPN) assumes the hidden parity is $k$-sparse instead of a potentially dense vector. While the simple enumeration algorithm takes ${n \choose k}=O(n/k)^k$ time, previously known results still need at least ${n \choose k/2} = \Omega(n/k)^{k/2}$ time for any noise rate $\eta$. Our framework provides an LSPN algorithm that runs in time $O(\eta \cdot n/k)^k$ for any noise rate $\eta$, which improves the state of the art for LSPN whenever $\eta \in (k/n,\sqrt{k/n})$. The sparse LPN problem is closely related to the classical problem of refuting random $k$-CSPs and has been widely used in cryptography as a hardness assumption. Unlike standard LPN, which samples random vectors in $\mathbf{F}_2^n$, it samples random $k$-sparse vectors. Because the number of $k$-sparse vectors is ${n \choose k} \ll 2^n$, sparse LPN admits polynomial-time learning algorithms given roughly $m > n^{k/2}$ samples. However, much less is known about learning algorithms for a constant $k$ like 3 and $m < n^{k/2}$ samples.
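For intuition on the LSPN bound: when $\eta < \sqrt{k/n}$ we have $\eta \cdot n/k < \sqrt{k/n} \cdot n/k = \sqrt{n/k}$, so $O(\eta \cdot n/k)^k < O(n/k)^{k/2}$, which is why the new running time beats the previous ${n \choose k/2}$-type barrier in exactly this regime; and for $\eta \le k/n$ the expression $O(\eta \cdot n/k)^k$ is at most constant, so the interesting improvement range is $\eta \in (k/n, \sqrt{k/n})$ as stated.

As a minimal illustration of the two problem setups, the Python sketch below generates LSPN and sparse LPN samples and runs the brute-force ${n \choose k}$-time LSPN enumeration baseline. This is a sketch of the baselines only, not the paper's algorithm; all function names and parameter choices are illustrative assumptions.

# Hypothetical, self-contained sketch: generate LSPN / sparse LPN samples
# and run the brute-force LSPN enumeration baseline.
from itertools import combinations
import random

def gen_lspn_samples(n, k, eta, m, rng):
    # LSPN: the hidden parity s is k-sparse, each x is uniform over F_2^n,
    # and the label <x, s> is flipped independently with probability eta.
    support = set(rng.sample(range(n), k))
    samples = []
    for _ in range(m):
        x = [rng.randrange(2) for _ in range(n)]
        y = (sum(x[i] for i in support) + (rng.random() < eta)) % 2
        samples.append((x, y))
    return support, samples

def gen_sparse_lpn_samples(n, k, eta, m, rng):
    # Sparse LPN contrast: the secret s may be dense, but each sample's
    # vector is a random k-sparse vector (represented by its support).
    s = [rng.randrange(2) for _ in range(n)]
    samples = []
    for _ in range(m):
        supp = rng.sample(range(n), k)
        y = (sum(s[i] for i in supp) + (rng.random() < eta)) % 2
        samples.append((supp, y))
    return s, samples

def enumerate_lspn(n, k, samples):
    # Baseline {n choose k}-time enumeration: return the k-subset whose
    # parity disagrees with the fewest labels (maximum likelihood).
    best, best_err = None, len(samples) + 1
    for cand in combinations(range(n), k):
        err = sum((sum(x[i] for i in cand) % 2) != y for x, y in samples)
        if err < best_err:
            best, best_err = set(cand), err
    return best

rng = random.Random(0)
secret, samples = gen_lspn_samples(n=20, k=3, eta=0.05, m=300, rng=rng)
print(enumerate_lspn(20, 3, samples) == secret)  # True w.h.p. at low noise

At low noise and with enough samples, enumerate_lspn recovers the planted support with high probability; its ${n \choose k}$-candidate loop is exactly the $O(n/k)^k$-time enumeration baseline that the paper's framework improves on.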

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-chen25f,
  title     = {Algorithms for Sparse LPN and LSPN Against Low-noise (extended abstract)},
  author    = {Chen, Xue and Shu, Wenxuan and Zhou, Zhaienhe},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {1091--1093},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/chen25f/chen25f.pdf},
  url       = {https://proceedings.mlr.press/v291/chen25f.html}
}
APA
Chen, X., Shu, W. & Zhou, Z. (2025). Algorithms for Sparse LPN and LSPN Against Low-noise (extended abstract). Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:1091-1093. Available from https://proceedings.mlr.press/v291/chen25f.html.
