SpeqNets: Sparsity-aware permutation-equivariant graph networks

Christopher Morris, Gaurav Rattan, Sandra Kiefer, Siamak Ravanbakhsh
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:16017-16042, 2022.

Abstract

While message-passing graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs or general relational data, more expressive, higher-order graph neural networks do not scale to large graphs. They either operate on $k$-order tensors or consider all $k$-node subgraphs, implying an exponential dependence on $k$ in memory requirements, and do not adapt to the sparsity of the graph. By introducing new heuristics for the graph isomorphism problem, we devise a class of universal, permutation-equivariant graph networks, which, unlike previous architectures, offer a fine-grained control between expressivity and scalability and adapt to the sparsity of the graph. These architectures lead to vastly reduced computation times compared to standard higher-order graph networks in the supervised node- and graph-level classification and regression regime while significantly improving standard graph neural network and graph kernel architectures in terms of predictive performance.

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-morris22a,
  title     = {{S}peq{N}ets: Sparsity-aware permutation-equivariant graph networks},
  author    = {Morris, Christopher and Rattan, Gaurav and Kiefer, Sandra and Ravanbakhsh, Siamak},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {16017--16042},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/morris22a/morris22a.pdf},
  url       = {https://proceedings.mlr.press/v162/morris22a.html},
  abstract  = {While message-passing graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs or general relational data, more expressive, higher-order graph neural networks do not scale to large graphs. They either operate on $k$-order tensors or consider all $k$-node subgraphs, implying an exponential dependence on $k$ in memory requirements, and do not adapt to the sparsity of the graph. By introducing new heuristics for the graph isomorphism problem, we devise a class of universal, permutation-equivariant graph networks, which, unlike previous architectures, offer a fine-grained control between expressivity and scalability and adapt to the sparsity of the graph. These architectures lead to vastly reduced computation times compared to standard higher-order graph networks in the supervised node- and graph-level classification and regression regime while significantly improving standard graph neural network and graph kernel architectures in terms of predictive performance.}
}
Endnote
%0 Conference Paper
%T SpeqNets: Sparsity-aware permutation-equivariant graph networks
%A Christopher Morris
%A Gaurav Rattan
%A Sandra Kiefer
%A Siamak Ravanbakhsh
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-morris22a
%I PMLR
%P 16017--16042
%U https://proceedings.mlr.press/v162/morris22a.html
%V 162
%X While message-passing graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs or general relational data, more expressive, higher-order graph neural networks do not scale to large graphs. They either operate on $k$-order tensors or consider all $k$-node subgraphs, implying an exponential dependence on $k$ in memory requirements, and do not adapt to the sparsity of the graph. By introducing new heuristics for the graph isomorphism problem, we devise a class of universal, permutation-equivariant graph networks, which, unlike previous architectures, offer a fine-grained control between expressivity and scalability and adapt to the sparsity of the graph. These architectures lead to vastly reduced computation times compared to standard higher-order graph networks in the supervised node- and graph-level classification and regression regime while significantly improving standard graph neural network and graph kernel architectures in terms of predictive performance.
APA
Morris, C., Rattan, G., Kiefer, S., &amp; Ravanbakhsh, S. (2022). SpeqNets: Sparsity-aware permutation-equivariant graph networks. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research, 162:16017-16042. Available from https://proceedings.mlr.press/v162/morris22a.html.