GRAND: Graph Neural Diffusion

Ben Chamberlain, James Rowbottom, Maria I Gorinova, Michael Bronstein, Stefan Webb, Emanuele Rossi
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:1407-1418, 2021.

Abstract

We present Graph Neural Diffusion (GRAND), which approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE. In our model, the layer structure and topology correspond to the discretisation choices of temporal and spatial operators. Our approach allows a principled development of a broad new class of GNNs that are able to address the common plights of graph learning models such as depth, oversmoothing, and bottlenecks. Key to the success of our models is stability with respect to perturbations in the data, and this is addressed for both implicit and explicit discretisation schemes. We develop linear and nonlinear versions of GRAND, which achieve competitive results on many standard graph benchmarks.
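To make the abstract's framing concrete, the following is a minimal sketch of the kind of graph diffusion formulation it describes; the notation is illustrative and not a verbatim reproduction of the paper's equations. Node features x(t) evolve under a diffusion PDE with a learned diffusivity, and one GNN layer corresponds to one time-discretisation step of that PDE:

    \frac{\partial x(t)}{\partial t} = \operatorname{div}\big( G(x(t), t)\, \nabla x(t) \big), \qquad x(0) = x_{\text{in}},

    x^{(k+1)} = x^{(k)} + \tau \big( A(x^{(k)}) - I \big) x^{(k)} \quad \text{(explicit Euler step of size } \tau\text{)}.

Here A(x) stands for an edge-supported, learned attention (diffusivity) matrix. Stacking explicit steps yields a deep GNN whose depth plays the role of integration time; an implicit scheme instead solves a linear system at each step, and the abstract's stability claim covers both cases.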

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-chamberlain21a,
  title     = {GRAND: Graph Neural Diffusion},
  author    = {Chamberlain, Ben and Rowbottom, James and Gorinova, Maria I and Bronstein, Michael and Webb, Stefan and Rossi, Emanuele},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {1407--1418},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/chamberlain21a/chamberlain21a.pdf},
  url       = {https://proceedings.mlr.press/v139/chamberlain21a.html},
  abstract  = {We present Graph Neural Diffusion (GRAND) that approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE. In our model, the layer structure and topology correspond to the discretisation choices of temporal and spatial operators. Our approach allows a principled development of a broad new class of GNNs that are able to address the common plights of graph learning models such as depth, oversmoothing, and bottlenecks. Key to the success of our models are stability with respect to perturbations in the data and this is addressed for both implicit and explicit discretisation schemes. We develop linear and nonlinear versions of GRAND, which achieve competitive results on many standard graph benchmarks.}
}

Endnote
%0 Conference Paper
%T GRAND: Graph Neural Diffusion
%A Ben Chamberlain
%A James Rowbottom
%A Maria I Gorinova
%A Michael Bronstein
%A Stefan Webb
%A Emanuele Rossi
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-chamberlain21a
%I PMLR
%P 1407--1418
%U https://proceedings.mlr.press/v139/chamberlain21a.html
%V 139
%X We present Graph Neural Diffusion (GRAND) that approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE. In our model, the layer structure and topology correspond to the discretisation choices of temporal and spatial operators. Our approach allows a principled development of a broad new class of GNNs that are able to address the common plights of graph learning models such as depth, oversmoothing, and bottlenecks. Key to the success of our models are stability with respect to perturbations in the data and this is addressed for both implicit and explicit discretisation schemes. We develop linear and nonlinear versions of GRAND, which achieve competitive results on many standard graph benchmarks.

APA
Chamberlain, B., Rowbottom, J., Gorinova, M.I., Bronstein, M., Webb, S. & Rossi, E. (2021). GRAND: Graph Neural Diffusion. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:1407-1418. Available from https://proceedings.mlr.press/v139/chamberlain21a.html.
