Rethinking Higher-Order Representation Learning With Graph Neural Networks
Proceedings of the Second Learning on Graphs Conference, PMLR 231:38:1-38:25, 2024.
Abstract
In the field of graph machine learning, graph neural networks (GNNs) are promising models for learning graph representations and node representations. However, many GNNs perform poorly at learning higher-order representations such as links due to their limited expressive power. Zhang et al. summarize recent advances in link prediction and propose the labeling trick as a common framework for learning node set representations with GNNs. However, their theory assumes an ideally expressive GNN as the backend, and can only justify a limited family of link prediction models. In this paper, we take a further step and study the expressive power of various higher-order representation learning methods. Our analysis begins by showing the inherent symmetry between node labeling and higher-order GNNs, which directly justifies previous labeling-trick methods (SEAL, GraIL), other node labeling methods (ID-GNN, NBFNet), and higher-order GNNs within a unified framework. We then study the use of MPNNs for computing representations in these methods, and establish upper bounds on their expressive power in these settings. After that, we provide a comprehensive analysis of how these previous methods surpass plain GNNs by showing their ability to capture path information. Finally, guided by the intuitions from this analysis, we propose an extremely simple method for link prediction tasks, which we believe could bring insights for designing more complicated and powerful models in the future.
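To make the labeling trick referenced above concrete, the sketch below (not from the paper; all function names and the toy mean-aggregation MPNN are illustrative assumptions) shows the common zero-one variant: before running message passing, the target node set of a candidate link {u, v} is marked with an extra feature so the GNN can distinguish it from the rest of the graph.

```python
import numpy as np

def zero_one_label(num_nodes, target_nodes):
    """Zero-one labeling trick: mark the target node set (e.g. a candidate
    link {u, v}) with 1 and every other node with 0."""
    labels = np.zeros((num_nodes, 1))
    labels[list(target_nodes)] = 1.0
    return labels

def mpnn_layer(adj, h, w):
    """One toy mean-aggregation message-passing layer with ReLU."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    return np.maximum((adj @ h) / deg @ w, 0.0)

def link_representation(adj, x, u, v, weights):
    """Hypothetical sketch: concatenate zero-one labels to node features,
    run a small MPNN, and pool the endpoint embeddings into a link vector."""
    h = np.concatenate([x, zero_one_label(len(x), {u, v})], axis=1)
    for w in weights:
        h = mpnn_layer(adj, h, w)
    return h[u] * h[v]  # Hadamard pooling of the two endpoint embeddings

# Toy usage: a 4-node path graph 0-1-2-3 with random features/weights.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 3))           # 3 input features + 1 label channel
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8))]
print(link_representation(adj, x, 0, 3, weights))
```

Without the label channel, a plain GNN would assign automorphic nodes identical embeddings and so could not distinguish certain candidate links; injecting the target-dependent labels before message passing is precisely what lets methods like SEAL learn node-set rather than node-wise representations.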