Simple Path Structural Encoding for Graph Transformers
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:857-873, 2025.
Abstract
Graph transformers extend global self-attention to graph-structured data, achieving notable success in graph learning. Recently, Relative Random Walk Probabilities (RRWP) has been found to further enhance their predictive power by encoding both structural and positional information into the edge representation. However, RRWP cannot always distinguish between edges that belong to different local graph patterns, which reduces its ability to capture the full structural complexity of graphs. This work introduces Simple Path Structural Encoding (SPSE), a novel method that utilizes simple path counts for edge encoding. We show theoretically and experimentally that SPSE overcomes the limitations of RRWP, providing a richer representation of graph structures, particularly in capturing local cyclic patterns. To make SPSE computationally tractable, we propose an efficient approximate algorithm for simple path counting. SPSE demonstrates significant performance improvements over RRWP on various benchmarks, including molecular and long-range graph datasets, achieving statistically significant gains in discriminative tasks. These results position SPSE as a powerful edge encoding alternative for enhancing the expressivity of graph transformers.
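To make the core quantity concrete, the sketch below counts simple paths (paths with no repeated node) of each length between all node pairs by exhaustive depth-first search. This is an illustrative brute-force baseline, not the paper's efficient approximate algorithm; the function name and interface are assumptions for this example, and the approach is exponential, so it is only viable on small graphs.

```python
def simple_path_counts(adj, max_len):
    """Count simple paths of lengths 1..max_len between all node pairs.

    Brute-force DFS for illustration only (exponential in max_len);
    `adj` maps each node to an iterable of its neighbours.
    Returns a dict: (u, v) -> list where entry k-1 is the number of
    simple paths of length k from u to v.
    """
    counts = {}

    def dfs(start, node, visited, depth):
        if depth > 0:
            # Record one simple path of this length from start to node.
            counts.setdefault((start, node), [0] * max_len)[depth - 1] += 1
        if depth == max_len:
            return
        for nxt in adj[node]:
            if nxt not in visited:  # keep the path simple
                visited.add(nxt)
                dfs(start, nxt, visited, depth + 1)
                visited.remove(nxt)

    for s in adj:
        dfs(s, s, {s}, 0)
    return counts

# Example: a 4-cycle 0-1-2-3-0. Opposite nodes (0, 2) are joined by two
# simple paths of length 2 and none of length 1 or 3.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
c = simple_path_counts(adj, 3)
```

On the 4-cycle, `c[(0, 2)]` is `[0, 2, 0]` (the two length-2 paths 0-1-2 and 0-3-2), while `c[(0, 1)]` is `[1, 0, 1]` (the direct edge plus the length-3 path 0-3-2-1), illustrating how per-length simple path counts separate edges in cyclic patterns.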