Position: Future Directions in the Theory of Graph Machine Learning

Christopher Morris, Fabrizio Frasca, Nadav Dym, Haggai Maron, Ismail Ilkan Ceylan, Ron Levie, Derek Lim, Michael M. Bronstein, Martin Grohe, Stefanie Jegelka
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:36294-36307, 2024.

Abstract

Machine learning on graphs, especially using graph neural networks (GNNs), has seen a surge in interest due to the wide availability of graph data across a broad spectrum of disciplines, from life to social and engineering sciences. Despite their practical success, our theoretical understanding of the properties of GNNs remains highly incomplete. Recent theoretical advancements primarily focus on elucidating the coarse-grained expressive power of GNNs, predominantly employing combinatorial techniques. However, these studies do not perfectly align with practice, particularly in understanding the generalization behavior of GNNs when trained with stochastic first-order optimization techniques. In this position paper, we argue that the graph machine learning community needs to shift its attention to developing a balanced theory of graph machine learning, focusing on a more thorough understanding of the interplay of expressive power, generalization, and optimization.

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-morris24a,
  title     = {Position: Future Directions in the Theory of Graph Machine Learning},
  author    = {Morris, Christopher and Frasca, Fabrizio and Dym, Nadav and Maron, Haggai and Ceylan, Ismail Ilkan and Levie, Ron and Lim, Derek and Bronstein, Michael M. and Grohe, Martin and Jegelka, Stefanie},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {36294--36307},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/morris24a/morris24a.pdf},
  url       = {https://proceedings.mlr.press/v235/morris24a.html},
  abstract  = {Machine learning on graphs, especially using graph neural networks (GNNs), has seen a surge in interest due to the wide availability of graph data across a broad spectrum of disciplines, from life to social and engineering sciences. Despite their practical success, our theoretical understanding of the properties of GNNs remains highly incomplete. Recent theoretical advancements primarily focus on elucidating the coarse-grained expressive power of GNNs, predominantly employing combinatorial techniques. However, these studies do not perfectly align with practice, particularly in understanding the generalization behavior of GNNs when trained with stochastic first-order optimization techniques. In this position paper, we argue that the graph machine learning community needs to shift its attention to developing a balanced theory of graph machine learning, focusing on a more thorough understanding of the interplay of expressive power, generalization, and optimization.}
}
Endnote
%0 Conference Paper
%T Position: Future Directions in the Theory of Graph Machine Learning
%A Christopher Morris
%A Fabrizio Frasca
%A Nadav Dym
%A Haggai Maron
%A Ismail Ilkan Ceylan
%A Ron Levie
%A Derek Lim
%A Michael M. Bronstein
%A Martin Grohe
%A Stefanie Jegelka
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-morris24a
%I PMLR
%P 36294--36307
%U https://proceedings.mlr.press/v235/morris24a.html
%V 235
%X Machine learning on graphs, especially using graph neural networks (GNNs), has seen a surge in interest due to the wide availability of graph data across a broad spectrum of disciplines, from life to social and engineering sciences. Despite their practical success, our theoretical understanding of the properties of GNNs remains highly incomplete. Recent theoretical advancements primarily focus on elucidating the coarse-grained expressive power of GNNs, predominantly employing combinatorial techniques. However, these studies do not perfectly align with practice, particularly in understanding the generalization behavior of GNNs when trained with stochastic first-order optimization techniques. In this position paper, we argue that the graph machine learning community needs to shift its attention to developing a balanced theory of graph machine learning, focusing on a more thorough understanding of the interplay of expressive power, generalization, and optimization.
APA
Morris, C., Frasca, F., Dym, N., Maron, H., Ceylan, I.I., Levie, R., Lim, D., Bronstein, M.M., Grohe, M. & Jegelka, S. (2024). Position: Future Directions in the Theory of Graph Machine Learning. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:36294-36307. Available from https://proceedings.mlr.press/v235/morris24a.html.