Rethinking Higher-Order Representation Learning With Graph Neural Networks

Tuo Xu, Lei Zou
Proceedings of the Second Learning on Graphs Conference, PMLR 231:38:1-38:25, 2024.

Abstract

In the field of graph machine learning, graph neural networks (GNNs) are promising models for learning graph and node representations. However, many GNNs perform poorly at learning higher-order representations such as links due to their limited expressive power. Zhang et al. summarize recent advances in link prediction and propose the labeling trick as a common framework for learning node set representations with GNNs. However, their theory assumes an ideally expressive GNN as the backend and can justify only a limited family of link prediction models. In this paper, we take a further step and study the expressive power of various higher-order representation learning methods. Our analysis begins by showing the inherent symmetry between node labeling and higher-order GNNs, which directly justifies previous labeling trick methods (SEAL, GraIL), other node labeling methods (ID-GNN, NBFNet), and higher-order GNNs within a unified framework. We then study the use of MPNNs for computing representations in these methods and establish upper bounds on their expressive power in these settings. After that, we provide a comprehensive analysis of how these methods surpass plain GNNs by showing their ability to capture path information. Finally, guided by the intuitions from this analysis, we propose an extremely simple method for link prediction tasks, which we believe can inform the design of more sophisticated and powerful models in the future.
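The labeling trick referenced in the abstract can be illustrated with a minimal sketch (this is not the paper's code; the function names and the simple zero-one labeling scheme are illustrative assumptions): to score a candidate link between two target nodes, one extracts the k-hop enclosing subgraph around the pair and tags the target nodes differently from all other nodes before feeding the subgraph to a GNN.

```python
from collections import deque

def k_hop_subgraph(adj, targets, k):
    """Collect all nodes within k hops of either target node via BFS.

    adj: dict mapping node -> iterable of neighbor nodes.
    """
    seen = set(targets)
    frontier = deque((t, 0) for t in targets)
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return seen

def zero_one_labels(adj, targets, k):
    """Simplest labeling trick: target nodes get label 1, others 0.

    A GNN run on the enclosing subgraph with these labels as extra
    node features can then distinguish the target pair from the rest.
    """
    nodes = k_hop_subgraph(adj, targets, k)
    return {v: int(v in targets) for v in nodes}

# Toy graph: a path 2 - 0 - 1 - 3 - 4; score the candidate link (0, 3).
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
labels = zero_one_labels(adj, (0, 3), k=1)
```

Methods such as SEAL use richer distance-based labels (e.g., distances to each target node) rather than this zero-one scheme, but the structure of the computation is the same.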

Cite this Paper


BibTeX
@InProceedings{pmlr-v231-xu24a, title = {Rethinking Higher-Order Representation Learning With Graph Neural Networks}, author = {Xu, Tuo and Zou, Lei}, booktitle = {Proceedings of the Second Learning on Graphs Conference}, pages = {38:1--38:25}, year = {2024}, editor = {Villar, Soledad and Chamberlain, Benjamin}, volume = {231}, series = {Proceedings of Machine Learning Research}, month = {27--30 Nov}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v231/xu24a/xu24a.pdf}, url = {https://proceedings.mlr.press/v231/xu24a.html}, abstract = {In the field of graph machine learning, graph neural networks (GNNs) are promising models for learning graph representations and node representations. However, many GNNs perform poorly on learning higher-order representations such as links due to their limited expressive power. Zhang et al. summarize recent advances in link prediction and propose labeling trick as a common framework for learning node set representations with GNNs. However, their theory is limited to employing an ideally expressive GNN as the backend, and can only justify a limited series of link prediction models. In this paper, we take a further step to study the expressive power of various higher-order representation learning methods. Our analysis begins with showing the inherent symmetry between node labeling and higher-order GNNs, which directly justifies previous labeling trick methods (SEAL, GraIL) and other node labeling methods (ID-GNN, NBFNet), also higher-order GNNs through a unified framework. Then, we study the utilization of MPNNs for computing representations in these methods, and show the expressive power upper bounds under these situations. After that, we provide a comprehensive analysis about how these previous methods surpass plain GNNs by showing their ability to capture path information. Finally, using the intuitions provided by the analysis, we propose an extremely simple method for link prediction tasks, which we believe could bring insights for designing more complicated and powerful models in the future.} }
Endnote
%0 Conference Paper %T Rethinking Higher-Order Representation Learning With Graph Neural Networks %A Tuo Xu %A Lei Zou %B Proceedings of the Second Learning on Graphs Conference %C Proceedings of Machine Learning Research %D 2024 %E Soledad Villar %E Benjamin Chamberlain %F pmlr-v231-xu24a %I PMLR %P 38:1--38:25 %U https://proceedings.mlr.press/v231/xu24a.html %V 231 %X In the field of graph machine learning, graph neural networks (GNNs) are promising models for learning graph representations and node representations. However, many GNNs perform poorly on learning higher-order representations such as links due to their limited expressive power. Zhang et al. summarize recent advances in link prediction and propose labeling trick as a common framework for learning node set representations with GNNs. However, their theory is limited to employing an ideally expressive GNN as the backend, and can only justify a limited series of link prediction models. In this paper, we take a further step to study the expressive power of various higher-order representation learning methods. Our analysis begins with showing the inherent symmetry between node labeling and higher-order GNNs, which directly justifies previous labeling trick methods (SEAL, GraIL) and other node labeling methods (ID-GNN, NBFNet), also higher-order GNNs through a unified framework. Then, we study the utilization of MPNNs for computing representations in these methods, and show the expressive power upper bounds under these situations. After that, we provide a comprehensive analysis about how these previous methods surpass plain GNNs by showing their ability to capture path information. Finally, using the intuitions provided by the analysis, we propose an extremely simple method for link prediction tasks, which we believe could bring insights for designing more complicated and powerful models in the future.
APA
Xu, T. & Zou, L. (2024). Rethinking Higher-Order Representation Learning With Graph Neural Networks. Proceedings of the Second Learning on Graphs Conference, in Proceedings of Machine Learning Research 231:38:1-38:25. Available from https://proceedings.mlr.press/v231/xu24a.html.
