Node Feature Kernels Increase Graph Convolutional Network Robustness

Mohamed El Amine Seddik, Changmin Wu, Johannes F. Lutzeyer, Michalis Vazirgiannis
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:6225-6241, 2022.

Abstract

The robustness of the much used Graph Convolutional Networks (GCNs) to perturbations of their input is becoming a topic of increasing importance. In this paper the random GCN is introduced for which a random matrix theory analysis is possible. This analysis suggests that if the graph is sufficiently perturbed, or in the extreme case random, then the GCN fails to benefit from the node features. It is furthermore observed that enhancing the message passing step in GCNs by adding the node feature kernel to the adjacency matrix of the graph structure solves this problem. An empirical study of a GCN utilised for node classification on six real datasets further confirms the theoretical findings and demonstrates that perturbations of the graph structure can result in GCNs performing significantly worse than Multi-Layer Perceptrons run on the node features alone. In practice, adding a node feature kernel to the message passing of perturbed graphs results in a significant improvement of the GCN’s performance, thereby rendering it more robust to graph perturbations. Our code is publicly available at: https://github.com/ChangminWu/RobustGCN.
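The abstract describes the method only at a high level: the GCN message-passing matrix is augmented by the node feature kernel XX^T before the usual propagation step. As a rough illustration, here is a minimal NumPy sketch of one such kernel-augmented GCN layer. This is not the authors' implementation (see the linked repository for the official code); the function name, the alpha weighting and the 1/n scaling of the kernel are illustrative assumptions.

import numpy as np

def feature_kernel_gcn_layer(A, X, W, alpha=1.0):
    """One GCN-style propagation step over a kernel-augmented adjacency.

    A     : (n, n) adjacency matrix of the (possibly perturbed) graph
    X     : (n, d) node feature matrix
    W     : (d, h) learnable weight matrix
    alpha : weighting of the node feature kernel (hypothetical parameter)
    """
    n = A.shape[0]
    # Standard GCN normalisation of the adjacency with self-loops.
    A_hat = A + np.eye(n)
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    # Node feature kernel, scaled by 1/n so its entries are comparable
    # in magnitude to the normalised adjacency (an illustrative choice).
    K = (X @ X.T) / n
    S = A_norm + alpha * K          # kernel-augmented message-passing matrix
    return np.maximum(S @ X @ W, 0.0)  # ReLU activation

# Toy usage on a random graph with random features.
rng = np.random.default_rng(0)
n, d, h = 8, 4, 3
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T      # symmetric adjacency, no self-loops
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, h))
H = feature_kernel_gcn_layer(A, X, W, alpha=0.5)
print(H.shape)                      # (8, 3)

The intuition carried by the paper's analysis is that when A is heavily perturbed (or purely random), the A_norm term alone no longer lets the GCN exploit the node features, whereas the added kernel term preserves feature information in the propagation.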

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-el-amine-seddik22a,
  title     = {Node Feature Kernels Increase Graph Convolutional Network Robustness},
  author    = {El Amine Seddik, Mohamed and Wu, Changmin and Lutzeyer, Johannes F. and Vazirgiannis, Michalis},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {6225--6241},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/el-amine-seddik22a/el-amine-seddik22a.pdf},
  url       = {https://proceedings.mlr.press/v151/el-amine-seddik22a.html},
  abstract  = {The robustness of the much used Graph Convolutional Networks (GCNs) to perturbations of their input is becoming a topic of increasing importance. In this paper the random GCN is introduced for which a random matrix theory analysis is possible. This analysis suggests that if the graph is sufficiently perturbed, or in the extreme case random, then the GCN fails to benefit from the node features. It is furthermore observed that enhancing the message passing step in GCNs by adding the node feature kernel to the adjacency matrix of the graph structure solves this problem. An empirical study of a GCN utilised for node classification on six real datasets further confirms the theoretical findings and demonstrates that perturbations of the graph structure can result in GCNs performing significantly worse than Multi-Layer Perceptrons run on the node features alone. In practice, adding a node feature kernel to the message passing of perturbed graphs results in a significant improvement of the GCN’s performance, thereby rendering it more robust to graph perturbations. Our code is publicly available at: https://github.com/ChangminWu/RobustGCN.}
}
Endnote
%0 Conference Paper
%T Node Feature Kernels Increase Graph Convolutional Network Robustness
%A Mohamed El Amine Seddik
%A Changmin Wu
%A Johannes F. Lutzeyer
%A Michalis Vazirgiannis
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-el-amine-seddik22a
%I PMLR
%P 6225--6241
%U https://proceedings.mlr.press/v151/el-amine-seddik22a.html
%V 151
%X The robustness of the much used Graph Convolutional Networks (GCNs) to perturbations of their input is becoming a topic of increasing importance. In this paper the random GCN is introduced for which a random matrix theory analysis is possible. This analysis suggests that if the graph is sufficiently perturbed, or in the extreme case random, then the GCN fails to benefit from the node features. It is furthermore observed that enhancing the message passing step in GCNs by adding the node feature kernel to the adjacency matrix of the graph structure solves this problem. An empirical study of a GCN utilised for node classification on six real datasets further confirms the theoretical findings and demonstrates that perturbations of the graph structure can result in GCNs performing significantly worse than Multi-Layer Perceptrons run on the node features alone. In practice, adding a node feature kernel to the message passing of perturbed graphs results in a significant improvement of the GCN’s performance, thereby rendering it more robust to graph perturbations. Our code is publicly available at: https://github.com/ChangminWu/RobustGCN.
APA
El Amine Seddik, M., Wu, C., Lutzeyer, J.F. & Vazirgiannis, M. (2022). Node Feature Kernels Increase Graph Convolutional Network Robustness. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:6225-6241. Available from https://proceedings.mlr.press/v151/el-amine-seddik22a.html.