Simple Path Structural Encoding for Graph Transformers

Louis Airale, Antonio Longa, Mattia Rigon, Andrea Passerini, Roberto Passerone
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:857-873, 2025.

Abstract

Graph transformers extend global self-attention to graph-structured data, achieving notable success in graph learning. Recently, Relative Random Walk Probabilities (RRWP) has been found to further enhance their predictive power by encoding both structural and positional information into the edge representation. However, RRWP cannot always distinguish between edges that belong to different local graph patterns, which reduces its ability to capture the full structural complexity of graphs. This work introduces Simple Path Structural Encoding (SPSE), a novel method that utilizes simple path counts for edge encoding. We show theoretically and experimentally that SPSE overcomes the limitations of RRWP, providing a richer representation of graph structures, particularly in capturing local cyclic patterns. To make SPSE computationally tractable, we propose an efficient approximate algorithm for simple path counting. SPSE demonstrates significant performance improvements over RRWP on various benchmarks, including molecular and long-range graph datasets, achieving statistically significant gains in discriminative tasks. These results pose SPSE as a powerful edge encoding alternative for enhancing the expressivity of graph transformers.
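The abstract's central object is the per-edge vector of simple path counts, i.e. for each node pair, the number of paths of each length that visit no node twice. As an illustration only, here is a minimal exact counter via exhaustive depth-first search; it is exponential in the worst case, which is precisely why the paper proposes an efficient approximate algorithm instead (the function and graph below are hypothetical, not the authors' code):

```python
from collections import defaultdict

def simple_path_counts(adj, max_len):
    """Count simple paths (no repeated nodes) of each length 1..max_len
    between every ordered node pair, by exhaustive DFS.

    Exact but exponential in the worst case -- for illustration on tiny
    graphs only; SPSE relies on an approximate counting algorithm.
    """
    counts = defaultdict(lambda: [0] * (max_len + 1))

    def dfs(start, node, visited, length):
        if length > 0:
            counts[(start, node)][length] += 1
        if length == max_len:
            return
        for nxt in adj[node]:
            if nxt not in visited:
                visited.add(nxt)
                dfs(start, nxt, visited, length + 1)
                visited.remove(nxt)

    for s in adj:
        dfs(s, s, {s}, 0)
    return counts

# Example: a 4-cycle 0-1-2-3-0. Between the adjacent nodes 0 and 1 there
# is one simple path of length 1 (the edge itself) and one of length 3
# (the long way around the cycle), so the count vector exposes the cycle.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(simple_path_counts(adj, 3)[(0, 1)])  # -> [0, 1, 0, 1]
```

Such length-indexed count vectors, attached to each edge, are the kind of structural signal the abstract contrasts with random-walk probabilities, which can blur distinct local cyclic patterns.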

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-airale25a,
  title     = {Simple Path Structural Encoding for Graph Transformers},
  author    = {Airale, Louis and Longa, Antonio and Rigon, Mattia and Passerini, Andrea and Passerone, Roberto},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {857--873},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/airale25a/airale25a.pdf},
  url       = {https://proceedings.mlr.press/v267/airale25a.html},
  abstract  = {Graph transformers extend global self-attention to graph-structured data, achieving notable success in graph learning. Recently, Relative Random Walk Probabilities (RRWP) has been found to further enhance their predictive power by encoding both structural and positional information into the edge representation. However, RRWP cannot always distinguish between edges that belong to different local graph patterns, which reduces its ability to capture the full structural complexity of graphs. This work introduces Simple Path Structural Encoding (SPSE), a novel method that utilizes simple path counts for edge encoding. We show theoretically and experimentally that SPSE overcomes the limitations of RRWP, providing a richer representation of graph structures, particularly in capturing local cyclic patterns. To make SPSE computationally tractable, we propose an efficient approximate algorithm for simple path counting. SPSE demonstrates significant performance improvements over RRWP on various benchmarks, including molecular and long-range graph datasets, achieving statistically significant gains in discriminative tasks. These results pose SPSE as a powerful edge encoding alternative for enhancing the expressivity of graph transformers.}
}
Endnote
%0 Conference Paper
%T Simple Path Structural Encoding for Graph Transformers
%A Louis Airale
%A Antonio Longa
%A Mattia Rigon
%A Andrea Passerini
%A Roberto Passerone
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-airale25a
%I PMLR
%P 857--873
%U https://proceedings.mlr.press/v267/airale25a.html
%V 267
%X Graph transformers extend global self-attention to graph-structured data, achieving notable success in graph learning. Recently, Relative Random Walk Probabilities (RRWP) has been found to further enhance their predictive power by encoding both structural and positional information into the edge representation. However, RRWP cannot always distinguish between edges that belong to different local graph patterns, which reduces its ability to capture the full structural complexity of graphs. This work introduces Simple Path Structural Encoding (SPSE), a novel method that utilizes simple path counts for edge encoding. We show theoretically and experimentally that SPSE overcomes the limitations of RRWP, providing a richer representation of graph structures, particularly in capturing local cyclic patterns. To make SPSE computationally tractable, we propose an efficient approximate algorithm for simple path counting. SPSE demonstrates significant performance improvements over RRWP on various benchmarks, including molecular and long-range graph datasets, achieving statistically significant gains in discriminative tasks. These results pose SPSE as a powerful edge encoding alternative for enhancing the expressivity of graph transformers.
APA
Airale, L., Longa, A., Rigon, M., Passerini, A., & Passerone, R. (2025). Simple Path Structural Encoding for Graph Transformers. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:857-873. Available from https://proceedings.mlr.press/v267/airale25a.html.
