Asymptotic Generalization Error of a Single-Layer Graph Convolutional Network

O Duranthon, Lenka Zdeborova
Proceedings of the Third Learning on Graphs Conference, PMLR 269:13:1-13:27, 2025.

Abstract

While graph convolutional networks show great practical promise, the theoretical understanding of their generalization properties as a function of the number of samples is still in its infancy compared to the more broadly studied case of supervised fully connected neural networks. In this article, we predict the performance of a single-layer graph convolutional network (GCN) trained on data produced by attributed stochastic block models (SBMs) in the high-dimensional limit. Previously, only ridge regression on the contextual SBM (CSBM) has been considered in Shi et al. 2022; we generalize the analysis to arbitrary convex loss and regularization for the CSBM and add the analysis for another data model, the neural-prior SBM. We derive the optimal parameters of the GCN. We also study the high signal-to-noise ratio limit, detail the convergence rates of the GCN, and show that, while consistent, it does not reach the Bayes-optimal rate for any of the considered cases.
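The setting described in the abstract can be illustrated with a minimal sketch: sample a CSBM-like instance (a two-community SBM graph plus node features correlated with the communities), apply one row-normalized graph convolution, and fit a readout with ridge regression, which is one convex-loss choice covered by the analysis. All parameter names and values below (`n`, `d`, `snr`, `c_in`, `c_out`, the regularization) are illustrative assumptions for this toy example, not the paper's parameterization or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy CSBM-like data (illustrative parameters, not the paper's) ---
n, d = 400, 100                                   # nodes, feature dimension
snr = 2.0                                         # feature signal strength (assumed)
y = rng.choice([-1.0, 1.0], size=n)               # two balanced communities

# SBM adjacency: higher edge probability within communities than across
c_in, c_out = 0.04, 0.01                          # assumed edge probabilities
P = np.where(np.outer(y, y) > 0, c_in, c_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                       # symmetric, no self-loops

# Features: community-aligned spike along a random direction u, plus noise
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
X = snr * np.outer(y, u) + rng.standard_normal((n, d))

# --- Single-layer GCN: one propagation step, then a linear readout ---
A_hat = A + np.eye(n)                             # add self-loops
A_hat /= A_hat.sum(axis=1, keepdims=True)         # row-normalized propagation
Z = A_hat @ X                                     # one graph convolution

def train_ridge(Z, y, reg=1e-1):
    """Ridge regression on the convolved features (a convex loss + l2 reg)."""
    return np.linalg.solve(Z.T @ Z + reg * np.eye(Z.shape[1]), Z.T @ y)

w = train_ridge(Z, y)
acc = np.mean(np.sign(Z @ w) == y)                # training accuracy on labels
```

The convolution averages each node's features with its neighbors', which reinforces the community signal because most neighbors share the node's label; the paper's analysis characterizes exactly this kind of estimator, but for general convex losses and in the high-dimensional limit rather than at fixed finite size.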

Cite this Paper


BibTeX
@InProceedings{pmlr-v269-duranthon25a,
  title     = {Asymptotic Generalization Error of a Single-Layer Graph Convolutional Network},
  author    = {Duranthon, O and Zdeborova, Lenka},
  booktitle = {Proceedings of the Third Learning on Graphs Conference},
  pages     = {13:1--13:27},
  year      = {2025},
  editor    = {Wolf, Guy and Krishnaswamy, Smita},
  volume    = {269},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--29 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v269/main/assets/duranthon25a/duranthon25a.pdf},
  url       = {https://proceedings.mlr.press/v269/duranthon25a.html},
  abstract  = {While graph convolutional networks show great practical promise, the theoretical understanding of their generalization properties as a function of the number of samples is still in its infancy compared to the more broadly studied case of supervised fully connected neural networks. In this article, we predict the performance of a single-layer graph convolutional network (GCN) trained on data produced by attributed stochastic block models (SBMs) in the high-dimensional limit. Previously, only ridge regression on the contextual SBM (CSBM) has been considered in Shi et al. 2022; we generalize the analysis to arbitrary convex loss and regularization for the CSBM and add the analysis for another data model, the neural-prior SBM. We derive the optimal parameters of the GCN. We also study the high signal-to-noise ratio limit, detail the convergence rates of the GCN, and show that, while consistent, it does not reach the Bayes-optimal rate for any of the considered cases.}
}
Endnote
%0 Conference Paper
%T Asymptotic Generalization Error of a Single-Layer Graph Convolutional Network
%A O Duranthon
%A Lenka Zdeborova
%B Proceedings of the Third Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Guy Wolf
%E Smita Krishnaswamy
%F pmlr-v269-duranthon25a
%I PMLR
%P 13:1--13:27
%U https://proceedings.mlr.press/v269/duranthon25a.html
%V 269
%X While graph convolutional networks show great practical promise, the theoretical understanding of their generalization properties as a function of the number of samples is still in its infancy compared to the more broadly studied case of supervised fully connected neural networks. In this article, we predict the performance of a single-layer graph convolutional network (GCN) trained on data produced by attributed stochastic block models (SBMs) in the high-dimensional limit. Previously, only ridge regression on the contextual SBM (CSBM) has been considered in Shi et al. 2022; we generalize the analysis to arbitrary convex loss and regularization for the CSBM and add the analysis for another data model, the neural-prior SBM. We derive the optimal parameters of the GCN. We also study the high signal-to-noise ratio limit, detail the convergence rates of the GCN, and show that, while consistent, it does not reach the Bayes-optimal rate for any of the considered cases.
APA
Duranthon, O. & Zdeborova, L. (2025). Asymptotic Generalization Error of a Single-Layer Graph Convolutional Network. Proceedings of the Third Learning on Graphs Conference, in Proceedings of Machine Learning Research 269:13:1-13:27. Available from https://proceedings.mlr.press/v269/duranthon25a.html.