Information-theoretic thresholds for community detection in sparse networks

Jess Banks, Cristopher Moore, Joe Neeman, Praneeth Netrapalli

29th Annual Conference on Learning Theory, PMLR 49:383-416, 2016.

Abstract

We give upper and lower bounds on the information-theoretic threshold for community detection in the stochastic block model. Specifically, consider a symmetric stochastic block model with q groups, average degree d, and connection probabilities c_in/n and c_out/n for within-group and between-group edges respectively; let λ = (c_in − c_out)/(qd). We show that, when q is large and λ = O(1/q), the critical value of d at which community detection becomes possible (in physical terms, the condensation threshold) is d_c = Θ((log q)/(qλ^2)), with tighter results in certain regimes. Above this threshold, we show that any partition of the nodes into q groups which is as ‘good’ as the planted one, in terms of the number of within- and between-group edges, is correlated with it. This gives an exponential-time algorithm that performs better than chance; specifically, community detection becomes possible below the Kesten-Stigum bound for q ≥ 5 in the disassortative case λ < 0, and for q ≥ 11 in the assortative case λ > 0 (similar upper bounds were obtained independently by Abbe and Sandon). Conversely, below this threshold, we show that no algorithm can label the vertices better than chance, or even distinguish the block model from an Erdős-Rényi random graph with high probability. Our lower bound on d_c uses Robinson and Wormald’s small subgraph conditioning method, and we also give (less explicit) results for non-symmetric stochastic block models. In the symmetric case, we obtain explicit results by using bounds on certain functions of doubly stochastic matrices due to Achlioptas and Naor; indeed, our lower bound on d_c is their second moment lower bound on the q-colorability threshold for random graphs with a certain effective degree.
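To make the scales in the abstract concrete, the sketch below compares the Kesten-Stigum degree bound d > 1/λ^2 (the standard form in this parametrization) with the condensation-threshold scale d_c = Θ((log q)/(qλ^2)). The constant hidden in the Θ is not given by the abstract, so the sketch assumes it is 1 purely for illustration; the sample values of q and λ are likewise arbitrary.

```python
import math

def kesten_stigum_degree(lam):
    # Kesten-Stigum bound: efficient detection is possible when
    # d * lambda^2 > 1, i.e. d > 1 / lambda^2.
    return 1.0 / lam**2

def condensation_degree_scale(q, lam):
    # Scale of the information-theoretic threshold from the abstract:
    # d_c = Theta(log q / (q * lambda^2)); the constant factor is
    # assumed to be 1 here for illustration only.
    return math.log(q) / (q * lam**2)

q = 20
lam = 0.04  # a value in the regime lambda = O(1/q)
d_ks = kesten_stigum_degree(lam)
d_c = condensation_degree_scale(q, lam)

# d_c is below d_ks by a factor of roughly q / log q, leaving a regime
# where detection is information-theoretically possible (by the paper's
# exponential-time algorithm) but below the Kesten-Stigum bound.
print(f"KS bound: d > {d_ks:.1f}; condensation scale: ~{d_c:.1f}")
```

The gap between the two quantities grows with q, which is consistent with the abstract's claim that detection below the Kesten-Stigum bound becomes possible once q is moderately large.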

Cite this Paper

BibTeX

@InProceedings{pmlr-v49-banks16,
title = {Information-theoretic thresholds for community detection in sparse networks},
author = {Banks, Jess and Moore, Cristopher and Neeman, Joe and Netrapalli, Praneeth},
booktitle = {29th Annual Conference on Learning Theory},
pages = {383--416},
year = {2016},
editor = {Feldman, Vitaly and Rakhlin, Alexander and Shamir, Ohad},
volume = {49},
series = {Proceedings of Machine Learning Research},
address = {Columbia University, New York, New York, USA},
month = {23--26 Jun},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v49/banks16.pdf},
url = {https://proceedings.mlr.press/v49/banks16.html},
abstract = {We give upper and lower bounds on the information-theoretic threshold for community detection in the stochastic block model. Specifically, consider a symmetric stochastic block model with $q$ groups, average degree $d$, and connection probabilities $c_\mathrm{in}/n$ and $c_\mathrm{out}/n$ for within-group and between-group edges respectively; let $\lambda = (c_\mathrm{in}-c_\mathrm{out})/(qd)$. We show that, when $q$ is large and $\lambda = O(1/q)$, the critical value of $d$ at which community detection becomes possible (in physical terms, the condensation threshold) is $d_\mathrm{c} = \Theta\left(\frac{\log q}{q\lambda^2}\right)$, with tighter results in certain regimes. Above this threshold, we show that any partition of the nodes into $q$ groups which is as `good' as the planted one, in terms of the number of within- and between-group edges, is correlated with it. This gives an exponential-time algorithm that performs better than chance; specifically, community detection becomes possible below the Kesten-Stigum bound for $q \ge 5$ in the disassortative case $\lambda < 0$, and for $q \ge 11$ in the assortative case $\lambda > 0$ (similar upper bounds were obtained independently by Abbe and Sandon). Conversely, below this threshold, we show that no algorithm can label the vertices better than chance, or even distinguish the block model from an Erdős-Rényi random graph with high probability. Our lower bound on $d_\mathrm{c}$ uses Robinson and Wormald's small subgraph conditioning method, and we also give (less explicit) results for non-symmetric stochastic block models. In the symmetric case, we obtain explicit results by using bounds on certain functions of doubly stochastic matrices due to Achlioptas and Naor; indeed, our lower bound on $d_\mathrm{c}$ is their second moment lower bound on the $q$-colorability threshold for random graphs with a certain effective degree.}
}

Endnote

%0 Conference Paper
%T Information-theoretic thresholds for community detection in sparse networks
%A Jess Banks
%A Cristopher Moore
%A Joe Neeman
%A Praneeth Netrapalli
%B 29th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2016
%E Vitaly Feldman
%E Alexander Rakhlin
%E Ohad Shamir
%F pmlr-v49-banks16
%I PMLR
%P 383--416
%U https://proceedings.mlr.press/v49/banks16.html
%V 49
%X We give upper and lower bounds on the information-theoretic threshold for community detection in the stochastic block model. Specifically, consider a symmetric stochastic block model with q groups, average degree d, and connection probabilities c_in/n and c_out/n for within-group and between-group edges respectively; let λ = (c_in − c_out)/(qd). We show that, when q is large and λ = O(1/q), the critical value of d at which community detection becomes possible (in physical terms, the condensation threshold) is d_c = Θ((log q)/(qλ^2)), with tighter results in certain regimes. Above this threshold, we show that any partition of the nodes into q groups which is as ‘good’ as the planted one, in terms of the number of within- and between-group edges, is correlated with it. This gives an exponential-time algorithm that performs better than chance; specifically, community detection becomes possible below the Kesten-Stigum bound for q ≥ 5 in the disassortative case λ < 0, and for q ≥ 11 in the assortative case λ > 0 (similar upper bounds were obtained independently by Abbe and Sandon). Conversely, below this threshold, we show that no algorithm can label the vertices better than chance, or even distinguish the block model from an Erdős-Rényi random graph with high probability. Our lower bound on d_c uses Robinson and Wormald’s small subgraph conditioning method, and we also give (less explicit) results for non-symmetric stochastic block models. In the symmetric case, we obtain explicit results by using bounds on certain functions of doubly stochastic matrices due to Achlioptas and Naor; indeed, our lower bound on d_c is their second moment lower bound on the q-colorability threshold for random graphs with a certain effective degree.

RIS

TY - CPAPER
TI - Information-theoretic thresholds for community detection in sparse networks
AU - Jess Banks
AU - Cristopher Moore
AU - Joe Neeman
AU - Praneeth Netrapalli
BT - 29th Annual Conference on Learning Theory
DA - 2016/06/06
ED - Vitaly Feldman
ED - Alexander Rakhlin
ED - Ohad Shamir
ID - pmlr-v49-banks16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 49
SP - 383
EP - 416
L1 - http://proceedings.mlr.press/v49/banks16.pdf
UR - https://proceedings.mlr.press/v49/banks16.html
AB - We give upper and lower bounds on the information-theoretic threshold for community detection in the stochastic block model. Specifically, consider a symmetric stochastic block model with q groups, average degree d, and connection probabilities c_in/n and c_out/n for within-group and between-group edges respectively; let λ = (c_in − c_out)/(qd). We show that, when q is large and λ = O(1/q), the critical value of d at which community detection becomes possible (in physical terms, the condensation threshold) is d_c = Θ((log q)/(qλ^2)), with tighter results in certain regimes. Above this threshold, we show that any partition of the nodes into q groups which is as ‘good’ as the planted one, in terms of the number of within- and between-group edges, is correlated with it. This gives an exponential-time algorithm that performs better than chance; specifically, community detection becomes possible below the Kesten-Stigum bound for q ≥ 5 in the disassortative case λ < 0, and for q ≥ 11 in the assortative case λ > 0 (similar upper bounds were obtained independently by Abbe and Sandon). Conversely, below this threshold, we show that no algorithm can label the vertices better than chance, or even distinguish the block model from an Erdős-Rényi random graph with high probability. Our lower bound on d_c uses Robinson and Wormald’s small subgraph conditioning method, and we also give (less explicit) results for non-symmetric stochastic block models. In the symmetric case, we obtain explicit results by using bounds on certain functions of doubly stochastic matrices due to Achlioptas and Naor; indeed, our lower bound on d_c is their second moment lower bound on the q-colorability threshold for random graphs with a certain effective degree.
ER -

APA

Banks, J., Moore, C., Neeman, J. &amp; Netrapalli, P. (2016). Information-theoretic thresholds for community detection in sparse networks. 29th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 49:383-416. Available from https://proceedings.mlr.press/v49/banks16.html.