Graph Positional Encoding via Random Feature Propagation

Moshe Eliasof, Fabrizio Frasca, Beatrice Bevilacqua, Eran Treister, Gal Chechik, Haggai Maron
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:9202-9223, 2023.

Abstract

Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding. Surprisingly, however, there is still no clear understanding of the relation between these two augmentation schemes. Here we propose a novel family of positional encoding schemes which draws a link between the above two approaches and improves over both. The new approach, named Random Feature Propagation (RFP), is inspired by the power iteration method and its generalizations. It concatenates several intermediate steps of an iterative algorithm for computing the dominant eigenvectors of a propagation matrix, starting from random node features. Notably, these propagation steps are based on graph-dependent propagation operators that can be either predefined or learned. We explore the theoretical and empirical benefits of RFP. First, we provide theoretical justifications for using random features, for incorporating early propagation steps, and for using multiple random initializations. Then, we empirically demonstrate that RFP significantly outperforms both spectral PE and random features in multiple node classification and graph classification benchmarks.
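The abstract describes RFP as concatenating intermediate steps of a power-iteration-style procedure applied to random node features. The following is a minimal sketch of that idea using a predefined symmetrically normalized adjacency as the propagation operator; the function name, defaults, and normalization details are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def rfp_positional_encoding(adj, num_random_feats=4, num_steps=5, seed=0):
    """Sketch of an RFP-style positional encoding (illustrative, not the
    paper's exact algorithm).

    Starts from random node features and repeatedly applies a symmetrically
    normalized adjacency matrix (a power-iteration step), concatenating all
    intermediate results into the final encoding.
    """
    n = adj.shape[0]
    rng = np.random.default_rng(seed)

    # Predefined propagation operator: A_hat = D^{-1/2} A D^{-1/2}
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    a_hat = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

    # Random initialization of node features
    x = rng.standard_normal((n, num_random_feats))
    steps = [x]
    for _ in range(num_steps):
        x = a_hat @ x
        # Column-wise normalization, as in the power iteration method
        x = x / np.linalg.norm(x, axis=0, keepdims=True)
        steps.append(x)

    # Concatenate the initial random features and all propagation steps
    return np.concatenate(steps, axis=1)
```

With `num_steps` large, the columns converge toward dominant eigenvectors of the operator; keeping the early steps as well is what distinguishes this scheme from pure spectral encodings. The paper also considers learned propagation operators, which this sketch omits.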

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-eliasof23a,
  title = {Graph Positional Encoding via Random Feature Propagation},
  author = {Eliasof, Moshe and Frasca, Fabrizio and Bevilacqua, Beatrice and Treister, Eran and Chechik, Gal and Maron, Haggai},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages = {9202--9223},
  year = {2023},
  editor = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume = {202},
  series = {Proceedings of Machine Learning Research},
  month = {23--29 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v202/eliasof23a/eliasof23a.pdf},
  url = {https://proceedings.mlr.press/v202/eliasof23a.html},
  abstract = {Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding. Surprisingly, however, there is still no clear understanding of the relation between these two augmentation schemes. Here we propose a novel family of positional encoding schemes which draws a link between the above two approaches and improves over both. The new approach, named Random Feature Propagation (RFP), is inspired by the power iteration method and its generalizations. It concatenates several intermediate steps of an iterative algorithm for computing the dominant eigenvectors of a propagation matrix, starting from random node features. Notably, these propagation steps are based on graph-dependent propagation operators that can be either predefined or learned. We explore the theoretical and empirical benefits of RFP. First, we provide theoretical justifications for using random features, for incorporating early propagation steps, and for using multiple random initializations. Then, we empirically demonstrate that RFP significantly outperforms both spectral PE and random features in multiple node classification and graph classification benchmarks.}
}
Endnote
%0 Conference Paper
%T Graph Positional Encoding via Random Feature Propagation
%A Moshe Eliasof
%A Fabrizio Frasca
%A Beatrice Bevilacqua
%A Eran Treister
%A Gal Chechik
%A Haggai Maron
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-eliasof23a
%I PMLR
%P 9202--9223
%U https://proceedings.mlr.press/v202/eliasof23a.html
%V 202
%X Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding. Surprisingly, however, there is still no clear understanding of the relation between these two augmentation schemes. Here we propose a novel family of positional encoding schemes which draws a link between the above two approaches and improves over both. The new approach, named Random Feature Propagation (RFP), is inspired by the power iteration method and its generalizations. It concatenates several intermediate steps of an iterative algorithm for computing the dominant eigenvectors of a propagation matrix, starting from random node features. Notably, these propagation steps are based on graph-dependent propagation operators that can be either predefined or learned. We explore the theoretical and empirical benefits of RFP. First, we provide theoretical justifications for using random features, for incorporating early propagation steps, and for using multiple random initializations. Then, we empirically demonstrate that RFP significantly outperforms both spectral PE and random features in multiple node classification and graph classification benchmarks.
APA
Eliasof, M., Frasca, F., Bevilacqua, B., Treister, E., Chechik, G., & Maron, H. (2023). Graph Positional Encoding via Random Feature Propagation. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:9202-9223. Available from https://proceedings.mlr.press/v202/eliasof23a.html.