Sharp thresholds in inference of planted subgraphs

Elchanan Mossel, Jonathan Niles-Weed, Youngtak Sohn, Nike Sun, Ilias Zadik
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:5573-5577, 2023.

Abstract

We connect the study of phase transitions in high-dimensional statistical inference to the study of threshold phenomena in random graphs. A major question in the study of the Erdős–Rényi random graph $G(n,p)$ is to understand the probability, as a function of $p$, that $G(n,p)$ contains a given subgraph $H=H_n$. This was studied for many specific examples of $H$, starting with classical work of Erdős and Rényi (1960). More recent work studies this question for general $H$, both in building a general theory of sharp versus coarse transitions (Friedgut and Bourgain, 1999; Hatami, 2012) and in results on the location of the transition (Kahn and Kalai, 2007; Talagrand, 2010; Frankston, Kahn, Narayanan, Park, 2019; Park and Pham, 2022).

In inference problems, one often studies the optimal accuracy of inference as a function of the amount of noise. In a variety of sparse recovery problems, an “all-or-nothing (AoN) phenomenon” has been observed: informally, as the amount of noise is gradually increased, at some critical threshold the inference problem undergoes a sharp jump from near-perfect recovery to near-zero accuracy (Gamarnik and Zadik, 2017; Reeves, Xu, Zadik, 2021). We can regard AoN as the natural inference analogue of the sharp threshold phenomenon in random graphs. In contrast with the general theory developed for sharp thresholds of random graph properties, the AoN phenomenon has so far been studied only in specific inference settings, and a general theory behind its appearance remains elusive.

In this paper we study the general problem of inferring a graph $H=H_n$ planted in an Erdős–Rényi random graph, thus naturally connecting the two lines of research mentioned above. We show that questions of AoN are closely connected to first moment thresholds, and to a generalization of the so-called Kahn–Kalai expectation threshold that scans over subgraphs of $H$ of edge density at least $q$. In a variety of settings we characterize AoN by showing that AoN occurs \emph{if and only if} this “generalized expectation threshold” is roughly constant in $q$. Our proofs combine techniques from random graph theory and Bayesian inference.
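As background for the first moment thresholds mentioned in the abstract, the following is the classical back-of-the-envelope computation for a fixed graph $H$ (the notation $m(H)$ and $p_{\mathrm{1M}}$ is ours, not the paper's; in the paper $H = H_n$ may grow with $n$). The expected number of copies of $H$, with $v(H)$ vertices and $e(H)$ edges, in $G(n,p)$ is
\[
  % each of the (n)_{v(H)}/|Aut(H)| vertex placements of H succeeds with probability p^{e(H)}
  \mathbb{E}\bigl[\#\{\text{copies of } H \text{ in } G(n,p)\}\bigr]
  \;=\; \frac{(n)_{v(H)}}{|\mathrm{Aut}(H)|}\, p^{e(H)}
  \;\asymp\; n^{v(H)}\, p^{e(H)},
\]
and since $H$ cannot appear before each of its subgraphs does (by Markov's inequality, any $H' \subseteq H$ with vanishing expected count is absent with high probability), the first moment threshold is driven by the densest subgraph:
\[
  % scan over all subgraphs H' of H with at least one edge
  m(H) \;=\; \max_{H' \subseteq H,\; e(H') \ge 1} \frac{e(H')}{v(H')},
  \qquad
  p_{\mathrm{1M}}(H) \;\asymp\; n^{-1/m(H)}.
\]
The “generalized expectation threshold” of the abstract restricts a scan of this kind to subgraphs $H' \subseteq H$ of edge density at least $q$; the paper's characterization ties AoN to whether the resulting threshold is roughly constant in $q$.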

Cite this Paper

BibTeX
@InProceedings{pmlr-v195-mossel23a,
  title     = {Sharp thresholds in inference of planted subgraphs},
  author    = {Mossel, Elchanan and Niles-Weed, Jonathan and Sohn, Youngtak and Sun, Nike and Zadik, Ilias},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {5573--5577},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/mossel23a/mossel23a.pdf},
  url       = {https://proceedings.mlr.press/v195/mossel23a.html}
}
Endnote
%0 Conference Paper
%T Sharp thresholds in inference of planted subgraphs
%A Elchanan Mossel
%A Jonathan Niles-Weed
%A Youngtak Sohn
%A Nike Sun
%A Ilias Zadik
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-mossel23a
%I PMLR
%P 5573--5577
%U https://proceedings.mlr.press/v195/mossel23a.html
%V 195
APA
Mossel, E., Niles-Weed, J., Sohn, Y., Sun, N. & Zadik, I. (2023). Sharp thresholds in inference of planted subgraphs. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:5573-5577. Available from https://proceedings.mlr.press/v195/mossel23a.html.
