Homomorphism Counts for Graph Neural Networks: All About That Basis

Emily Jin, Michael M. Bronstein, Ismail Ilkan Ceylan, Matthias Lanzinger
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:22075-22098, 2024.

Abstract

A large body of work has investigated the properties of graph neural networks and identified several limitations, particularly pertaining to their expressive power. Their inability to count certain patterns (e.g., cycles) in a graph lies at the heart of such limitations, since many functions to be learned rely on the ability of counting such patterns. Two prominent paradigms aim to address this limitation by enriching the graph features with subgraph or homomorphism pattern counts. In this work, we show that both of these approaches are sub-optimal in a certain sense and argue for a more fine-grained approach, which incorporates the homomorphism counts of all structures in the “basis” of the target pattern. This yields strictly more expressive architectures without incurring any additional overhead in terms of computational complexity compared to existing approaches. We prove a series of theoretical results on node-level and graph-level motif parameters and empirically validate them on standard benchmark datasets.
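To make the notion of "homomorphism counts" concrete, below is a minimal stdlib-only sketch of a brute-force homomorphism counter. A homomorphism maps every edge of a pattern to an edge of the target graph without requiring injectivity; the paper's proposal is to feed a GNN the homomorphism counts of every structure in the target pattern's basis rather than a single subgraph or homomorphism count. The function name `hom_count` and the edge-list representation are illustrative choices, not the paper's implementation.

```python
from itertools import product

def hom_count(pattern_edges, n_pattern, graph_edges, n_graph):
    """Brute-force count of homomorphisms from a pattern graph
    (nodes 0..n_pattern-1) into a target graph (nodes 0..n_graph-1).
    A homomorphism is any vertex map that sends every pattern edge
    to an edge of the target; it need not be injective."""
    # Symmetric adjacency set for the (undirected) target graph.
    adj = {(u, v) for u, v in graph_edges} | {(v, u) for u, v in graph_edges}
    count = 0
    # Enumerate all n_graph ** n_pattern vertex maps and keep the
    # edge-preserving ones (exponential; fine only for tiny patterns).
    for phi in product(range(n_graph), repeat=n_pattern):
        if all((phi[u], phi[v]) in adj for u, v in pattern_edges):
            count += 1
    return count

triangle = [(0, 1), (1, 2), (2, 0)]
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(hom_count(triangle, 3, k4, 4))  # 24: exactly the injective maps into K4
```

Classical results (e.g., Curticapean, Dell, and Marx) express a pattern's subgraph count as a fixed linear combination of homomorphism counts over its basis, which is why supplying all basis counts is at least as informative as supplying the subgraph count alone.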

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-jin24a,
  title     = {Homomorphism Counts for Graph Neural Networks: All About That Basis},
  author    = {Jin, Emily and Bronstein, Michael M. and Ceylan, Ismail Ilkan and Lanzinger, Matthias},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {22075--22098},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/jin24a/jin24a.pdf},
  url       = {https://proceedings.mlr.press/v235/jin24a.html},
  abstract  = {A large body of work has investigated the properties of graph neural networks and identified several limitations, particularly pertaining to their expressive power. Their inability to count certain patterns (e.g., cycles) in a graph lies at the heart of such limitations, since many functions to be learned rely on the ability of counting such patterns. Two prominent paradigms aim to address this limitation by enriching the graph features with subgraph or homomorphism pattern counts. In this work, we show that both of these approaches are sub-optimal in a certain sense and argue for a more fine-grained approach, which incorporates the homomorphism counts of all structures in the ``basis'' of the target pattern. This yields strictly more expressive architectures without incurring any additional overhead in terms of computational complexity compared to existing approaches. We prove a series of theoretical results on node-level and graph-level motif parameters and empirically validate them on standard benchmark datasets.}
}
Endnote
%0 Conference Paper
%T Homomorphism Counts for Graph Neural Networks: All About That Basis
%A Emily Jin
%A Michael M. Bronstein
%A Ismail Ilkan Ceylan
%A Matthias Lanzinger
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-jin24a
%I PMLR
%P 22075--22098
%U https://proceedings.mlr.press/v235/jin24a.html
%V 235
%X A large body of work has investigated the properties of graph neural networks and identified several limitations, particularly pertaining to their expressive power. Their inability to count certain patterns (e.g., cycles) in a graph lies at the heart of such limitations, since many functions to be learned rely on the ability of counting such patterns. Two prominent paradigms aim to address this limitation by enriching the graph features with subgraph or homomorphism pattern counts. In this work, we show that both of these approaches are sub-optimal in a certain sense and argue for a more fine-grained approach, which incorporates the homomorphism counts of all structures in the “basis” of the target pattern. This yields strictly more expressive architectures without incurring any additional overhead in terms of computational complexity compared to existing approaches. We prove a series of theoretical results on node-level and graph-level motif parameters and empirically validate them on standard benchmark datasets.
APA
Jin, E., Bronstein, M. M., Ceylan, I. I., & Lanzinger, M. (2024). Homomorphism Counts for Graph Neural Networks: All About That Basis. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:22075-22098. Available from https://proceedings.mlr.press/v235/jin24a.html.