How Expressive are Knowledge Graph Foundation Models?

Xingyue Huang, Pablo Barcelo, Michael M. Bronstein, Ismail Ilkan Ceylan, Mikhail Galkin, Juan L Reutter, Miguel Romero Orth
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:25021-25058, 2025.

Abstract

Knowledge Graph Foundation Models (KGFMs) are at the frontier for deep learning on knowledge graphs (KGs), as they can generalize to completely novel knowledge graphs with different relational vocabularies. Despite their empirical success, our theoretical understanding of KGFMs remains very limited. In this paper, we conduct a rigorous study of the expressive power of KGFMs. Specifically, we show that the expressive power of KGFMs directly depends on the motifs that are used to learn the relation representations. We then observe that the most typical motifs used in the existing literature are binary, as the representations are learned based on how pairs of relations interact, which limits the model’s expressiveness. As part of our study, we design more expressive KGFMs using richer motifs, which necessitate learning relation representations based on, e.g., how triples of relations interact with each other. Finally, we empirically validate our theoretical findings, showing that the use of richer motifs results in better performance on a wide range of datasets drawn from different domains.
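To make the abstract's distinction concrete, here is a minimal, hypothetical sketch of what "binary" versus richer motifs could look like as co-occurrence statistics over relations. The function names, the specific interaction scheme (relations meeting at a shared entity), and the counting procedure are illustrative assumptions for intuition only, not the paper's actual construction.

```python
# Hypothetical sketch: relation "motifs" as co-occurrence counts.
# Binary motifs record how PAIRS of relations interact; richer
# (ternary) motifs record how TRIPLES of relations interact.
# The interaction scheme here (incidence at a shared entity) is an
# illustrative assumption, not the paper's definition.
from collections import defaultdict
from itertools import combinations


def _relations_at_entities(triples):
    """Map each entity to the set of relations incident to it."""
    rels_at = defaultdict(set)
    for head, rel, tail in triples:
        rels_at[head].add(rel)
        rels_at[tail].add(rel)
    return rels_at


def binary_motifs(triples):
    """Count how often two relations meet at a common entity."""
    counts = defaultdict(int)
    for rels in _relations_at_entities(triples).values():
        for pair in combinations(sorted(rels), 2):
            counts[pair] += 1
    return dict(counts)


def ternary_motifs(triples):
    """Count how often three relations meet at a common entity."""
    counts = defaultdict(int)
    for rels in _relations_at_entities(triples).values():
        for trio in combinations(sorted(rels), 3):
            counts[trio] += 1
    return dict(counts)


# Toy knowledge graph: three relations all incident to entity "b".
kg = [("a", "r1", "b"), ("b", "r2", "c"), ("b", "r3", "d")]
print(binary_motifs(kg))   # pairwise interactions
print(ternary_motifs(kg))  # three-way interactions
```

Because these statistics depend only on relation structure (not on relation names), they transfer to unseen vocabularies; a model restricted to the pairwise counts cannot distinguish graphs that differ only in how relation triples co-occur, which is the kind of gap the paper formalizes.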

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-huang25a,
  title     = {How Expressive are Knowledge Graph Foundation Models?},
  author    = {Huang, Xingyue and Barcelo, Pablo and Bronstein, Michael M. and Ceylan, Ismail Ilkan and Galkin, Mikhail and Reutter, Juan L and Romero Orth, Miguel},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {25021--25058},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/huang25a/huang25a.pdf},
  url       = {https://proceedings.mlr.press/v267/huang25a.html},
  abstract  = {Knowledge Graph Foundation Models (KGFMs) are at the frontier for deep learning on knowledge graphs (KGs), as they can generalize to completely novel knowledge graphs with different relational vocabularies. Despite their empirical success, our theoretical understanding of KGFMs remains very limited. In this paper, we conduct a rigorous study of the expressive power of KGFMs. Specifically, we show that the expressive power of KGFMs directly depends on the motifs that are used to learn the relation representations. We then observe that the most typical motifs used in the existing literature are binary, as the representations are learned based on how pairs of relations interact, which limits the model’s expressiveness. As part of our study, we design more expressive KGFMs using richer motifs, which necessitate learning relation representations based on, e.g., how triples of relations interact with each other. Finally, we empirically validate our theoretical findings, showing that the use of richer motifs results in better performance on a wide range of datasets drawn from different domains.}
}
Endnote
%0 Conference Paper
%T How Expressive are Knowledge Graph Foundation Models?
%A Xingyue Huang
%A Pablo Barcelo
%A Michael M. Bronstein
%A Ismail Ilkan Ceylan
%A Mikhail Galkin
%A Juan L Reutter
%A Miguel Romero Orth
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-huang25a
%I PMLR
%P 25021--25058
%U https://proceedings.mlr.press/v267/huang25a.html
%V 267
%X Knowledge Graph Foundation Models (KGFMs) are at the frontier for deep learning on knowledge graphs (KGs), as they can generalize to completely novel knowledge graphs with different relational vocabularies. Despite their empirical success, our theoretical understanding of KGFMs remains very limited. In this paper, we conduct a rigorous study of the expressive power of KGFMs. Specifically, we show that the expressive power of KGFMs directly depends on the motifs that are used to learn the relation representations. We then observe that the most typical motifs used in the existing literature are binary, as the representations are learned based on how pairs of relations interact, which limits the model’s expressiveness. As part of our study, we design more expressive KGFMs using richer motifs, which necessitate learning relation representations based on, e.g., how triples of relations interact with each other. Finally, we empirically validate our theoretical findings, showing that the use of richer motifs results in better performance on a wide range of datasets drawn from different domains.
APA
Huang, X., Barcelo, P., Bronstein, M.M., Ceylan, I.I., Galkin, M., Reutter, J.L. & Romero Orth, M. (2025). How Expressive are Knowledge Graph Foundation Models? Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:25021-25058. Available from https://proceedings.mlr.press/v267/huang25a.html.