Position: Spectral GNNs Rely Less on Graph Fourier Basis than Conceived

Yuhe Guo, Huayi Tang, Jiahong Ma, Hongteng Xu, Zhewei Wei
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:81471-81493, 2025.

Abstract

Spectral graph learning builds upon two foundations: the graph Fourier basis as its theoretical cornerstone, with polynomial approximation to enable practical implementation. While this framework has led to numerous successful designs, we argue that its effectiveness might stem from mechanisms different from its theoretical foundations. In this paper, we identify two fundamental issues that challenge our current understanding: (1) The graph Fourier basis $\mathbf{U}$ (eigenvectors of the normalized graph Laplacian) faces too many questions to truly serve its intended role, particularly in preserving the semantic properties of Fourier analysis; (2) The limitations preventing expressive filters are not merely practical constraints, but fundamental barriers that naturally protect stability and generalization. Importantly, the two issues entangle with each other. The second has obscured the first: the natural avoidance of complex filters has prevented us from fully confronting the questions about $\mathbf{U}$'s role as a Fourier basis. This observation leads to our position: the effectiveness of spectral GNNs relies less on the graph Fourier basis than originally conceived, or, in other words, spectral GNNs might not be so spectral. The position leads us to at least two potential research interests: to incorporate a more semantically meaningful graph dictionary beyond $\mathbf{U}$, and to re-examine the theoretical role of the introduced polynomial techniques.
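The entanglement the abstract describes can be made concrete: a polynomial filter $g(\mathbf{L}) = \sum_k c_k \mathbf{L}^k$ is mathematically equivalent to spectral filtering $\mathbf{U}\,g(\boldsymbol{\Lambda})\,\mathbf{U}^\top$, yet its implementation never computes $\mathbf{U}$ at all. A minimal NumPy sketch (the 4-node path graph and filter coefficients are hypothetical choices for illustration, not from the paper):

```python
import numpy as np

# Hypothetical 4-node path graph, given by its adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Normalized graph Laplacian: L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt

# Graph Fourier basis U: eigenvectors of L (eigenvalues lie in [0, 2]).
lam, U = np.linalg.eigh(L)

x = np.array([1.0, 0.0, 0.0, 0.0])  # a graph signal

# An illustrative polynomial filter g(lambda) = 1 - lambda + 0.5 lambda^2.
# (1) Spectral form: transform by U^T, scale frequencies, transform back.
g_lam = np.polyval([0.5, -1.0, 1.0], lam)   # highest-degree coeff first
x_spectral = U @ (g_lam * (U.T @ x))

# (2) Polynomial form: g(L) x via matrix-vector products, no U needed.
x_poly = x - L @ x + 0.5 * (L @ (L @ x))

assert np.allclose(x_spectral, x_poly)
```

The two outputs coincide, which is exactly why the avoidance of $\mathbf{U}$ in practice could obscure questions about its role: the deployed computation only ever touches $\mathbf{L}$.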

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-guo25w,
  title     = {Position: Spectral {GNN}s Rely Less on Graph {F}ourier Basis than Conceived},
  author    = {Guo, Yuhe and Tang, Huayi and Ma, Jiahong and Xu, Hongteng and Wei, Zhewei},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {81471--81493},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/guo25w/guo25w.pdf},
  url       = {https://proceedings.mlr.press/v267/guo25w.html},
  abstract  = {Spectral graph learning builds upon two foundations: Graph Fourier basis as its theoretical cornerstone, with polynomial approximation to enable practical implementation. While this framework has led to numerous successful designs, we argue that its effectiveness might stem from mechanisms different from its theoretical foundations. In this paper, we identify two fundamental issues that challenge our current understanding: (1) The graph Fourier basis $\mathbf{U}$ (eigenvectors of the normalized graph Laplacian) faces too many questions to truly serve its intended role, particularly in preserving its semantic properties of Fourier analysis; (2) The limitations preventing expressive filters are not merely practical constraints, but fundamental barriers that naturally protect stability and generalization. Importantly, the two issues entangle with each other. The second obscured the first: the natural avoidance of complex filters has prevented us from fully confronting the questions about $\mathbf{U}$'s role as a Fourier basis. This observation leads to our position: the effectiveness of spectral GNNs relies less on Graph Fourier basis than originally conceived, or, in other words, spectral GNNs might not be so spectral. The position leads us to at least two potential research interests: to incorporate a more semantically meaningful graph dictionary except for $\mathbf{U}$, and to re-examine the theoretical role of the introduced polynomial techniques.}
}
Endnote
%0 Conference Paper
%T Position: Spectral GNNs Rely Less on Graph Fourier Basis than Conceived
%A Yuhe Guo
%A Huayi Tang
%A Jiahong Ma
%A Hongteng Xu
%A Zhewei Wei
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-guo25w
%I PMLR
%P 81471--81493
%U https://proceedings.mlr.press/v267/guo25w.html
%V 267
%X Spectral graph learning builds upon two foundations: Graph Fourier basis as its theoretical cornerstone, with polynomial approximation to enable practical implementation. While this framework has led to numerous successful designs, we argue that its effectiveness might stem from mechanisms different from its theoretical foundations. In this paper, we identify two fundamental issues that challenge our current understanding: (1) The graph Fourier basis $\mathbf{U}$ (eigenvectors of the normalized graph Laplacian) faces too many questions to truly serve its intended role, particularly in preserving its semantic properties of Fourier analysis; (2) The limitations preventing expressive filters are not merely practical constraints, but fundamental barriers that naturally protect stability and generalization. Importantly, the two issues entangle with each other. The second obscured the first: the natural avoidance of complex filters has prevented us from fully confronting the questions about $\mathbf{U}$'s role as a Fourier basis. This observation leads to our position: the effectiveness of spectral GNNs relies less on Graph Fourier basis than originally conceived, or, in other words, spectral GNNs might not be so spectral. The position leads us to at least two potential research interests: to incorporate a more semantically meaningful graph dictionary except for $\mathbf{U}$, and to re-examine the theoretical role of the introduced polynomial techniques.
APA
Guo, Y., Tang, H., Ma, J., Xu, H. & Wei, Z. (2025). Position: Spectral GNNs Rely Less on Graph Fourier Basis than Conceived. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:81471-81493. Available from https://proceedings.mlr.press/v267/guo25w.html.

Related Material