Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling

Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:13014-13051, 2022.

Abstract

Graph convolutional networks (GCNs) have recently achieved great empirical success in learning graph-structured data. To address their scalability issue, caused by the recursive embedding of neighboring features, graph topology sampling has been proposed to reduce the memory and computational cost of training GCNs, and in many empirical studies it achieves test performance comparable to training without topology sampling. To the best of our knowledge, this paper provides the first theoretical justification of graph topology sampling in training (up to) three-layer GCNs for semi-supervised node classification. We formally characterize sufficient conditions on graph topology sampling under which GCN training leads to diminishing generalization error. Moreover, our analysis tackles the non-convex interaction of weights across layers, which is under-explored in existing theoretical analyses of GCNs. This paper explicitly characterizes the impact of graph structures and topology sampling on the generalization performance and sample complexity, and the theoretical findings are validated by numerical experiments.
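To make the abstract's central object concrete, the following Python snippet sketches one GCN layer with and without neighbor (topology) sampling. This is an illustrative sketch only, assuming a mean-aggregation layer and uniform neighbor sampling in the spirit of GraphSAGE-style samplers; it is not the paper's exact sampling scheme, and the function names (gcn_layer, sampled_gcn_layer) are hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): one GCN layer with full
# neighbor aggregation versus topology-sampled aggregation.
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, H, W):
    # Full aggregation: mean over all neighbors, then a linear map and ReLU.
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return np.maximum((A @ H / deg) @ W, 0.0)

def sampled_gcn_layer(A, H, W, k, rng):
    # Topology-sampled aggregation: each node averages over at most k
    # uniformly sampled neighbors instead of its full neighborhood,
    # reducing the memory/compute of the recursive embedding.
    n = A.shape[0]
    agg = np.zeros_like(H)
    for v in range(n):
        nbrs = np.flatnonzero(A[v])
        if nbrs.size == 0:
            continue
        picked = rng.choice(nbrs, size=min(k, nbrs.size), replace=False)
        agg[v] = H[picked].mean(axis=0)
    return np.maximum(agg @ W, 0.0)

# Tiny random undirected graph: 8 nodes, 4-dim features, 3-dim output.
n, d_in, d_out = 8, 4, 3
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 0)
A = np.maximum(A, A.T)  # symmetrize so the graph is undirected
H = rng.standard_normal((n, d_in))
W = rng.standard_normal((d_in, d_out))

full = gcn_layer(A, H, W)
sampled = sampled_gcn_layer(A, H, W, k=2, rng=rng)
print("mean |full - sampled| per entry:", np.abs(full - sampled).mean())
```

The sampled output is an unbiased but noisy stand-in for the full aggregation; the paper's question is when training on such sampled aggregations still yields diminishing generalization error.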

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-li22u,
  title     = {Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling},
  author    = {Li, Hongkang and Wang, Meng and Liu, Sijia and Chen, Pin-Yu and Xiong, Jinjun},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {13014--13051},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/li22u/li22u.pdf},
  url       = {https://proceedings.mlr.press/v162/li22u.html}
}
Endnote
%0 Conference Paper
%T Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling
%A Hongkang Li
%A Meng Wang
%A Sijia Liu
%A Pin-Yu Chen
%A Jinjun Xiong
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-li22u
%I PMLR
%P 13014--13051
%U https://proceedings.mlr.press/v162/li22u.html
%V 162
APA
Li, H., Wang, M., Liu, S., Chen, P.-Y., & Xiong, J. (2022). Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:13014-13051. Available from https://proceedings.mlr.press/v162/li22u.html.