Towards characterizing the value of edge embeddings in Graph Neural Networks

Dhruv Rohatgi, Tanya Marwah, Zachary Chase Lipton, Jianfeng Lu, Ankur Moitra, Andrej Risteski
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:51905-51923, 2025.

Abstract

Graph neural networks (GNNs) are the dominant approach to solving machine learning problems defined over graphs. Despite much theoretical and empirical work in recent years, our understanding of finer-grained aspects of architectural design for GNNs remains impoverished. In this paper, we consider the benefits of architectures that maintain and update edge embeddings. On the theoretical front, under a suitable computational abstraction for a layer in the model, as well as memory constraints on the embeddings, we show that there are natural tasks on graphical models for which architectures leveraging edge embeddings can be much shallower. Our techniques are inspired by results on time-space tradeoffs in theoretical computer science. Empirically, we show architectures that maintain edge embeddings almost always improve on their node-based counterparts—frequently significantly so in topologies that have "hub" nodes.
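To make the architectural distinction concrete, below is a minimal sketch of a message-passing layer that maintains and updates edge embeddings alongside node embeddings. This is an illustrative assumption written in PyTorch, not the construction from the paper: the module name EdgeEmbeddingLayer, the MLP update rules, and the sum aggregation are all choices made for the example.

import torch
import torch.nn as nn

class EdgeEmbeddingLayer(nn.Module):
    # Illustrative sketch (not the paper's architecture): each edge keeps its
    # own embedding, updated from its endpoints; each node then aggregates
    # the updated embeddings of its incoming edges.
    def __init__(self, dim: int):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, x, e, edge_index):
        # x: (num_nodes, dim) node embeddings
        # e: (num_edges, dim) edge embeddings
        # edge_index: (2, num_edges) source/target indices of each edge
        src, dst = edge_index
        # Update each edge from its current state and both endpoint states.
        e_new = self.edge_mlp(torch.cat([e, x[src], x[dst]], dim=-1))
        # Sum the updated edge states into each edge's target node.
        agg = torch.zeros_like(x).index_add_(0, dst, e_new)
        x_new = self.node_mlp(torch.cat([x, agg], dim=-1))
        return x_new, e_new

Stacking several such layers plays the role of depth in the paper's abstraction; a purely node-based counterpart would drop e entirely and pass messages computed only from x. The paper's claim is that, under memory constraints on the embeddings, maintaining this per-edge state can substantially reduce the depth needed for certain natural tasks on graphical models.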

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-rohatgi25a,
  title     = {Towards characterizing the value of edge embeddings in Graph Neural Networks},
  author    = {Rohatgi, Dhruv and Marwah, Tanya and Lipton, Zachary Chase and Lu, Jianfeng and Moitra, Ankur and Risteski, Andrej},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {51905--51923},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/rohatgi25a/rohatgi25a.pdf},
  url       = {https://proceedings.mlr.press/v267/rohatgi25a.html},
  abstract  = {Graph neural networks (GNNs) are the dominant approach to solving machine learning problems defined over graphs. Despite much theoretical and empirical work in recent years, our understanding of finer-grained aspects of architectural design for GNNs remains impoverished. In this paper, we consider the benefits of architectures that maintain and update edge embeddings. On the theoretical front, under a suitable computational abstraction for a layer in the model, as well as memory constraints on the embeddings, we show that there are natural tasks on graphical models for which architectures leveraging edge embeddings can be much shallower. Our techniques are inspired by results on time-space tradeoffs in theoretical computer science. Empirically, we show architectures that maintain edge embeddings almost always improve on their node-based counterparts—frequently significantly so in topologies that have "hub" nodes.}
}
Endnote
%0 Conference Paper
%T Towards characterizing the value of edge embeddings in Graph Neural Networks
%A Dhruv Rohatgi
%A Tanya Marwah
%A Zachary Chase Lipton
%A Jianfeng Lu
%A Ankur Moitra
%A Andrej Risteski
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-rohatgi25a
%I PMLR
%P 51905--51923
%U https://proceedings.mlr.press/v267/rohatgi25a.html
%V 267
%X Graph neural networks (GNNs) are the dominant approach to solving machine learning problems defined over graphs. Despite much theoretical and empirical work in recent years, our understanding of finer-grained aspects of architectural design for GNNs remains impoverished. In this paper, we consider the benefits of architectures that maintain and update edge embeddings. On the theoretical front, under a suitable computational abstraction for a layer in the model, as well as memory constraints on the embeddings, we show that there are natural tasks on graphical models for which architectures leveraging edge embeddings can be much shallower. Our techniques are inspired by results on time-space tradeoffs in theoretical computer science. Empirically, we show architectures that maintain edge embeddings almost always improve on their node-based counterparts—frequently significantly so in topologies that have "hub" nodes.
APA
Rohatgi, D., Marwah, T., Lipton, Z.C., Lu, J., Moitra, A. & Risteski, A. (2025). Towards characterizing the value of edge embeddings in Graph Neural Networks. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:51905-51923. Available from https://proceedings.mlr.press/v267/rohatgi25a.html.
