Understanding the Effect of GCN Convolutions in Regression Tasks

Juntong Chen, Johannes Schmidt-Hieber, Claire Donnat, Olga Klopp
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4573-4581, 2025.

Abstract

Graph Convolutional Networks (GCNs) have become a pivotal method in machine learning for modeling functions over graphs. Despite their widespread success across various applications, their statistical properties (e.g., consistency, convergence rates) remain ill-characterized. To begin addressing this knowledge gap, we consider networks for which the graph structure implies that neighboring nodes exhibit similar signals, and we provide statistical theory for the impact of convolution operators. Focusing on estimators based solely on neighborhood aggregation, we examine how two common convolutions—the original GCN and GraphSAGE convolutions—affect the learning error as a function of the neighborhood topology and the number of convolutional layers. We explicitly characterize the bias-variance-type trade-off incurred by GCNs as a function of the neighborhood size and identify specific graph topologies where convolution operators are less effective. Our theoretical findings are corroborated by synthetic experiments and provide a starting point for a deeper quantitative understanding of convolutional effects in GCNs, offering rigorous guidelines for practitioners.
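The abstract studies estimators built purely from repeated neighborhood aggregation and the resulting bias-variance-type trade-off in the number of layers. As a minimal illustrative sketch (not the authors' code), the snippet below applies the two standard convolution operators, the symmetrically normalized GCN operator D^{-1/2}(A+I)D^{-1/2} of Kipf and Welling and a GraphSAGE-style mean over the closed neighborhood D^{-1}(A+I), to a noisy smooth signal on a cycle graph; the graph, signal, and noise level are assumptions chosen only to make the effect visible.

import numpy as np

rng = np.random.default_rng(0)

# Cycle graph on n nodes: neighboring nodes carry similar signal values,
# matching the smoothness assumption described in the abstract.
n = 200
A = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n
    A[i, j] = A[j, i] = 1.0

I = np.eye(n)
deg = (A + I).sum(axis=1)                        # degrees with self-loops

S_gcn = (A + I) / np.sqrt(np.outer(deg, deg))    # D^{-1/2} (A+I) D^{-1/2}
S_sage = (A + I) / deg[:, None]                  # D^{-1} (A+I): mean aggregation

f = np.sin(2 * np.pi * np.arange(n) / n)         # smooth ground-truth signal
y = f + 0.5 * rng.standard_normal(n)             # noisy node observations

# An aggregation-only estimator with L layers is S^L y; track its error.
for name, S in [("GCN", S_gcn), ("SAGE-mean", S_sage)]:
    for L in [0, 1, 2, 4, 8, 16]:
        mse = np.mean((np.linalg.matrix_power(S, L) @ y - f) ** 2)
        print(f"{name:9s} L={L:2d}  MSE={mse:.4f}")

On this toy graph the error first drops as aggregation averages out the noise (variance reduction), while for large enough L the bias from oversmoothing toward the neighborhood mean eventually dominates; this mirrors, informally, the trade-off in the number of layers that the paper quantifies.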

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-chen25j,
  title     = {Understanding the Effect of GCN Convolutions in Regression Tasks},
  author    = {Chen, Juntong and Schmidt-Hieber, Johannes and Donnat, Claire and Klopp, Olga},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4573--4581},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/chen25j/chen25j.pdf},
  url       = {https://proceedings.mlr.press/v258/chen25j.html},
  abstract  = {Graph Convolutional Networks (GCNs) have become a pivotal method in machine learning for modeling functions over graphs. Despite their widespread success across various applications, their statistical properties (e.g., consistency, convergence rates) remain ill-characterized. To begin addressing this knowledge gap, we consider networks for which the graph structure implies that neighboring nodes exhibit similar signals, and we provide statistical theory for the impact of convolution operators. Focusing on estimators based solely on neighborhood aggregation, we examine how two common convolutions—the original GCN and GraphSAGE convolutions—affect the learning error as a function of the neighborhood topology and the number of convolutional layers. We explicitly characterize the bias-variance-type trade-off incurred by GCNs as a function of the neighborhood size and identify specific graph topologies where convolution operators are less effective. Our theoretical findings are corroborated by synthetic experiments and provide a starting point for a deeper quantitative understanding of convolutional effects in GCNs, offering rigorous guidelines for practitioners.}
}
Endnote
%0 Conference Paper
%T Understanding the Effect of GCN Convolutions in Regression Tasks
%A Juntong Chen
%A Johannes Schmidt-Hieber
%A Claire Donnat
%A Olga Klopp
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-chen25j
%I PMLR
%P 4573--4581
%U https://proceedings.mlr.press/v258/chen25j.html
%V 258
%X Graph Convolutional Networks (GCNs) have become a pivotal method in machine learning for modeling functions over graphs. Despite their widespread success across various applications, their statistical properties (e.g., consistency, convergence rates) remain ill-characterized. To begin addressing this knowledge gap, we consider networks for which the graph structure implies that neighboring nodes exhibit similar signals, and we provide statistical theory for the impact of convolution operators. Focusing on estimators based solely on neighborhood aggregation, we examine how two common convolutions—the original GCN and GraphSAGE convolutions—affect the learning error as a function of the neighborhood topology and the number of convolutional layers. We explicitly characterize the bias-variance-type trade-off incurred by GCNs as a function of the neighborhood size and identify specific graph topologies where convolution operators are less effective. Our theoretical findings are corroborated by synthetic experiments and provide a starting point for a deeper quantitative understanding of convolutional effects in GCNs, offering rigorous guidelines for practitioners.
APA
Chen, J., Schmidt-Hieber, J., Donnat, C. & Klopp, O. (2025). Understanding the Effect of GCN Convolutions in Regression Tasks. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4573-4581. Available from https://proceedings.mlr.press/v258/chen25j.html.