From Theory to Practice: Rethinking Green and Martin Kernels for Unleashing Graph Transformers

Yoon Hyeok Lee, Jaemin Park, Taejin Paik, Doyun Kim, Bosun Hwang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:33688-33714, 2025.

Abstract

Graph Transformers (GTs) have emerged as a powerful alternative to message-passing neural networks, yet their performance heavily depends on effectively embedding structural inductive biases. In this work, we introduce novel structural encodings (SEs) grounded in a rigorous analysis of random walks (RWs), leveraging Green and Martin kernels that we have carefully redefined for AI applications while preserving their mathematical essence. These kernels capture the long-term behavior of RWs on graphs and allow for enhanced representation of complex topologies, including non-aperiodic and directed acyclic substructures. Empirical evaluations across eight benchmark datasets demonstrate strong performance across diverse tasks, notably in molecular and circuit domains. We attribute this performance boost to the improved ability of our kernel-based SEs to encode intricate structural information, thereby strengthening the global attention and inductive bias within GTs. This work highlights the effectiveness of theoretically grounded kernel methods in advancing Transformer-based models for graph learning.
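
For context, the sketch below illustrates the classical definitions these encodings build on: for a random walk with n-step transition probabilities p_n(x, y), the Green kernel G(x, y) = sum_n p_n(x, y) is the expected number of visits to y starting from x, and the Martin kernel normalizes it against a reference node, K(x, y) = G(x, y) / G(o, y). This is a minimal, hypothetical illustration using a truncated sum over a simple random walk; the paper's redefined kernels for AI applications may be computed differently, and the truncation length k_max and the reference node root are assumptions of this example, not parameters taken from the paper.

    # Illustrative sketch only: classical (truncated) Green and Martin kernels of a
    # simple random walk, used as hypothetical node-pair structural encodings.
    # k_max and the reference node `root` are assumptions of this example.
    import numpy as np
    import networkx as nx

    def green_kernel(G: nx.Graph, k_max: int = 32) -> np.ndarray:
        """Truncated Green kernel: sum_{n=0}^{k_max} P^n[x, y], the expected number
        of visits to y within k_max steps of a simple random walk started at x."""
        A = nx.to_numpy_array(G)
        deg = A.sum(axis=1, keepdims=True)
        P = A / np.maximum(deg, 1.0)            # row-stochastic transition matrix
        green = np.zeros_like(P)
        Pn = np.eye(P.shape[0])                 # P^0
        for _ in range(k_max + 1):
            green += Pn
            Pn = Pn @ P
        return green

    def martin_kernel(green: np.ndarray, root: int = 0) -> np.ndarray:
        """Martin kernel relative to a reference node: K(x, y) = G(x, y) / G(root, y)."""
        denom = np.maximum(green[root], 1e-12)  # guard against division by zero
        return green / denom

    if __name__ == "__main__":
        g = nx.cycle_graph(6)                   # a small non-aperiodic toy graph
        green = green_kernel(g)
        martin = martin_kernel(green, root=0)
        print(green.shape, martin.shape)        # (6, 6) node-pair encodings

In practice such node-pair matrices could be consumed the same way other RW-based encodings are (e.g. as relative positional biases in attention or as per-node features after aggregation), but how the paper injects its redefined kernels into the GT architecture is described in the full text, not here.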

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-lee25ae,
  title     = {From Theory to Practice: Rethinking Green and Martin Kernels for Unleashing Graph Transformers},
  author    = {Lee, Yoon Hyeok and Park, Jaemin and Paik, Taejin and Kim, Doyun and Hwang, Bosun},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {33688--33714},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/lee25ae/lee25ae.pdf},
  url       = {https://proceedings.mlr.press/v267/lee25ae.html},
  abstract  = {Graph Transformers (GTs) have emerged as a powerful alternative to message-passing neural networks, yet their performance heavily depends on effectively embedding structural inductive biases. In this work, we introduce novel structural encodings (SEs) grounded in a rigorous analysis of random walks (RWs), leveraging Green and Martin kernels that we have carefully redefined for AI applications while preserving their mathematical essence. These kernels capture the long-term behavior of RWs on graphs and allow for enhanced representation of complex topologies, including non-aperiodic and directed acyclic substructures. Empirical evaluations across eight benchmark datasets demonstrate strong performance across diverse tasks, notably in molecular and circuit domains. We attribute this performance boost to the improved ability of our kernel-based SEs to encode intricate structural information, thereby strengthening the global attention and inductive bias within GTs. This work highlights the effectiveness of theoretically grounded kernel methods in advancing Transformer-based models for graph learning.}
}
Endnote
%0 Conference Paper
%T From Theory to Practice: Rethinking Green and Martin Kernels for Unleashing Graph Transformers
%A Yoon Hyeok Lee
%A Jaemin Park
%A Taejin Paik
%A Doyun Kim
%A Bosun Hwang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-lee25ae
%I PMLR
%P 33688--33714
%U https://proceedings.mlr.press/v267/lee25ae.html
%V 267
%X Graph Transformers (GTs) have emerged as a powerful alternative to message-passing neural networks, yet their performance heavily depends on effectively embedding structural inductive biases. In this work, we introduce novel structural encodings (SEs) grounded in a rigorous analysis of random walks (RWs), leveraging Green and Martin kernels that we have carefully redefined for AI applications while preserving their mathematical essence. These kernels capture the long-term behavior of RWs on graphs and allow for enhanced representation of complex topologies, including non-aperiodic and directed acyclic substructures. Empirical evaluations across eight benchmark datasets demonstrate strong performance across diverse tasks, notably in molecular and circuit domains. We attribute this performance boost to the improved ability of our kernel-based SEs to encode intricate structural information, thereby strengthening the global attention and inductive bias within GTs. This work highlights the effectiveness of theoretically grounded kernel methods in advancing Transformer-based models for graph learning.
APA
Lee, Y. H., Park, J., Paik, T., Kim, D. & Hwang, B. (2025). From Theory to Practice: Rethinking Green and Martin Kernels for Unleashing Graph Transformers. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:33688-33714. Available from https://proceedings.mlr.press/v267/lee25ae.html.
