Layer-wise Adaptive Graph Convolution Networks Using Generalized Pagerank
Proceedings of The 14th Asian Conference on Machine
Learning, PMLR 189:1117-1132, 2023.
Abstract
We investigate adaptive layer-wise graph
convolution in deep GCN models. We propose AdaGPR,
which learns generalized Pagerank coefficients at each
layer of a GCNII network to induce adaptive
convolution. We show that the generalization bound of
AdaGPR is a polynomial in the eigenvalue spectrum of
the normalized adjacency matrix, of degree equal to
the number of generalized Pagerank coefficients. By
analysing the generalization bounds we show that
oversmoothing depends on both convolution by higher
powers of the normalized adjacency matrix
and the depth of the model. We evaluated
AdaGPR on node classification using benchmark
real-world datasets and show that it achieves improved
accuracy compared to existing graph convolution
networks while remaining robust against
oversmoothing. Further, we demonstrate that analysing
the coefficients of the layer-wise generalized Pageranks
allows us to qualitatively understand the convolution at
each layer, enabling model interpretation.
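The core operation described above, combining powers of the normalized adjacency matrix weighted by generalized Pagerank coefficients, can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (symmetric normalization with self-loops, fixed rather than learned coefficients, hypothetical function names), not the authors' implementation:

```python
import numpy as np

def normalized_adjacency(adj):
    # Symmetrically normalized adjacency with self-loops:
    # A_hat = D^{-1/2} (A + I) D^{-1/2}
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]

def generalized_pagerank_conv(x, a_hat, gammas):
    # Generalized Pagerank convolution:
    #   H = sum_k gamma_k * A_hat^k @ X
    # In AdaGPR the gammas would be learned per layer; here they are
    # fixed inputs for illustration.
    out = gammas[0] * x
    h = x
    for g in gammas[1:]:
        h = a_hat @ h        # next power of A_hat applied to X
        out = out + g * h
    return out
```

With `gammas = [1, 0, ...]` the operation reduces to the identity (no smoothing), while weight placed on higher-order terms applies higher powers of `A_hat`, i.e. stronger smoothing, which is the per-layer trade-off the learned coefficients control.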