Layer-wise Adaptive Graph Convolution Networks Using Generalized Pagerank

Kishan Wimalawarne, Taiji Suzuki
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:1117-1132, 2023.

Abstract

We investigate adaptive layer-wise graph convolution in deep GCN models. We propose AdaGPR to learn generalized Pageranks at each layer of a GCNII network to induce adaptive convolution. We show that the generalization bound for AdaGPR is bounded by a polynomial of the eigenvalue spectrum of the normalized adjacency matrix in the order of the number of generalized Pagerank coefficients. By analysing the generalization bounds we show that oversmoothing depends on both the convolutions by the higher orders of the normalized adjacency matrix and the depth of the model. We performed evaluations on node-classification using benchmark real data and show that AdaGPR provides improved accuracies compared to existing graph convolution networks while demonstrating robustness against oversmoothing. Further, we demonstrate that analysis of coefficients of layer-wise generalized Pageranks allows us to qualitatively understand convolution at each layer enabling model interpretations.
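The core operation the abstract describes, mixing powers of the normalized adjacency matrix with per-layer generalized Pagerank coefficients, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the toy graph, and the fixed `gamma` coefficients (which AdaGPR would learn at each layer) are all assumptions.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def generalized_pagerank(A_hat, X, gamma):
    """Generalized PageRank propagation: H = sum_k gamma[k] * A_hat^k @ X.

    gamma holds the coefficients of the polynomial in A_hat; in AdaGPR
    a coefficient vector like this would be learned per layer.
    """
    H = gamma[0] * X          # k = 0 term: identity convolution
    Z = X
    for g in gamma[1:]:
        Z = A_hat @ Z         # next power of A_hat applied to X
        H = H + g * Z
    return H

# Toy 4-node path graph with identity features (illustrative only).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
A_hat = normalized_adjacency(A)
gamma = np.array([0.5, 0.3, 0.2])   # hypothetical learned coefficients
H = generalized_pagerank(A_hat, X, gamma)
```

With `gamma` concentrated on the k = 0 term the layer reduces to no propagation, while weight on higher-order terms aggregates from larger neighborhoods; inspecting the learned coefficients per layer is what enables the interpretability analysis the abstract mentions.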

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-wimalawarne23a,
  title     = {Layer-wise Adaptive Graph Convolution Networks Using Generalized Pagerank},
  author    = {Wimalawarne, Kishan and Suzuki, Taiji},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {1117--1132},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/wimalawarne23a/wimalawarne23a.pdf},
  url       = {https://proceedings.mlr.press/v189/wimalawarne23a.html},
  abstract  = {We investigate adaptive layer-wise graph convolution in deep GCN models. We propose AdaGPR to learn generalized Pageranks at each layer of a GCNII network to induce adaptive convolution. We show that the generalization bound for AdaGPR is bounded by a polynomial of the eigenvalue spectrum of the normalized adjacency matrix in the order of the number of generalized Pagerank coefficients. By analysing the generalization bounds we show that oversmoothing depends on both the convolutions by the higher orders of the normalized adjacency matrix and the depth of the model. We performed evaluations on node-classification using benchmark real data and show that AdaGPR provides improved accuracies compared to existing graph convolution networks while demonstrating robustness against oversmoothing. Further, we demonstrate that analysis of coefficients of layer-wise generalized Pageranks allows us to qualitatively understand convolution at each layer enabling model interpretations.}
}
Endnote
%0 Conference Paper
%T Layer-wise Adaptive Graph Convolution Networks Using Generalized Pagerank
%A Kishan Wimalawarne
%A Taiji Suzuki
%B Proceedings of The 14th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Emtiyaz Khan
%E Mehmet Gonen
%F pmlr-v189-wimalawarne23a
%I PMLR
%P 1117--1132
%U https://proceedings.mlr.press/v189/wimalawarne23a.html
%V 189
%X We investigate adaptive layer-wise graph convolution in deep GCN models. We propose AdaGPR to learn generalized Pageranks at each layer of a GCNII network to induce adaptive convolution. We show that the generalization bound for AdaGPR is bounded by a polynomial of the eigenvalue spectrum of the normalized adjacency matrix in the order of the number of generalized Pagerank coefficients. By analysing the generalization bounds we show that oversmoothing depends on both the convolutions by the higher orders of the normalized adjacency matrix and the depth of the model. We performed evaluations on node-classification using benchmark real data and show that AdaGPR provides improved accuracies compared to existing graph convolution networks while demonstrating robustness against oversmoothing. Further, we demonstrate that analysis of coefficients of layer-wise generalized Pageranks allows us to qualitatively understand convolution at each layer enabling model interpretations.
APA
Wimalawarne, K., & Suzuki, T. (2023). Layer-wise Adaptive Graph Convolution Networks Using Generalized Pagerank. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:1117-1132. Available from https://proceedings.mlr.press/v189/wimalawarne23a.html.