Beyond Message Passing: Neural Graph Pattern Machine

Zehong Wang, Zheyuan Zhang, Tianyi Ma, Nitesh V Chawla, Chuxu Zhang, Yanfang Ye
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:65496-65517, 2025.

Abstract

Graph learning tasks often hinge on identifying key substructure patterns—such as triadic closures in social networks or benzene rings in molecular graphs—that underpin downstream performance. However, most existing graph neural networks (GNNs) rely on message passing, which aggregates local neighborhood information iteratively and struggles to explicitly capture such fundamental motifs, like triangles, $k$-cliques, and rings. This limitation hinders both expressiveness and long-range dependency modeling. In this paper, we introduce the Neural Graph Pattern Machine (GPM), a novel framework that bypasses message passing by learning directly from graph substructures. GPM efficiently extracts, encodes, and prioritizes task-relevant graph patterns, offering greater expressivity and improved ability to capture long-range dependencies. Empirical evaluations across four standard tasks—node classification, link prediction, graph classification, and graph regression—demonstrate that GPM outperforms state-of-the-art baselines. Further analysis reveals that GPM exhibits strong out-of-distribution generalization, desirable scalability, and enhanced interpretability. Code and datasets are available at: https://github.com/Zehong-Wang/GPM.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-wang25ep,
  title     = {Beyond Message Passing: Neural Graph Pattern Machine},
  author    = {Wang, Zehong and Zhang, Zheyuan and Ma, Tianyi and Chawla, Nitesh V and Zhang, Chuxu and Ye, Yanfang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {65496--65517},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wang25ep/wang25ep.pdf},
  url       = {https://proceedings.mlr.press/v267/wang25ep.html},
  abstract  = {Graph learning tasks often hinge on identifying key substructure patterns—such as triadic closures in social networks or benzene rings in molecular graphs—that underpin downstream performance. However, most existing graph neural networks (GNNs) rely on message passing, which aggregates local neighborhood information iteratively and struggles to explicitly capture such fundamental motifs, like triangles, $k$-cliques, and rings. This limitation hinders both expressiveness and long-range dependency modeling. In this paper, we introduce the Neural Graph Pattern Machine (GPM), a novel framework that bypasses message passing by learning directly from graph substructures. GPM efficiently extracts, encodes, and prioritizes task-relevant graph patterns, offering greater expressivity and improved ability to capture long-range dependencies. Empirical evaluations across four standard tasks—node classification, link prediction, graph classification, and graph regression—demonstrate that GPM outperforms state-of-the-art baselines. Further analysis reveals that GPM exhibits strong out-of-distribution generalization, desirable scalability, and enhanced interpretability. Code and datasets are available at: https://github.com/Zehong-Wang/GPM.}
}
Endnote
%0 Conference Paper
%T Beyond Message Passing: Neural Graph Pattern Machine
%A Zehong Wang
%A Zheyuan Zhang
%A Tianyi Ma
%A Nitesh V Chawla
%A Chuxu Zhang
%A Yanfang Ye
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wang25ep
%I PMLR
%P 65496--65517
%U https://proceedings.mlr.press/v267/wang25ep.html
%V 267
%X Graph learning tasks often hinge on identifying key substructure patterns—such as triadic closures in social networks or benzene rings in molecular graphs—that underpin downstream performance. However, most existing graph neural networks (GNNs) rely on message passing, which aggregates local neighborhood information iteratively and struggles to explicitly capture such fundamental motifs, like triangles, $k$-cliques, and rings. This limitation hinders both expressiveness and long-range dependency modeling. In this paper, we introduce the Neural Graph Pattern Machine (GPM), a novel framework that bypasses message passing by learning directly from graph substructures. GPM efficiently extracts, encodes, and prioritizes task-relevant graph patterns, offering greater expressivity and improved ability to capture long-range dependencies. Empirical evaluations across four standard tasks—node classification, link prediction, graph classification, and graph regression—demonstrate that GPM outperforms state-of-the-art baselines. Further analysis reveals that GPM exhibits strong out-of-distribution generalization, desirable scalability, and enhanced interpretability. Code and datasets are available at: https://github.com/Zehong-Wang/GPM.
APA
Wang, Z., Zhang, Z., Ma, T., Chawla, N.V., Zhang, C. & Ye, Y. (2025). Beyond Message Passing: Neural Graph Pattern Machine. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:65496-65517. Available from https://proceedings.mlr.press/v267/wang25ep.html.