The ART of Link Prediction with KGEs

Yannick Brunink, Michael Cochez, Jacopo Urbani
Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, PMLR 284:519-539, 2025.

Abstract

Link Prediction (LP) in Knowledge Graphs (KGs) is typically framed as ranking candidate entities for a query of the form $(entity, relation, ?)$, with models evaluated on their ability to rank the correct entities for each query. At the same time, Knowledge Graph Embedding (KGE) models used for this task produce unnormalised scores, making it unclear how to interpret their belief in the truthfulness of triples across different queries. Together, these two factors create a blind spot: models can achieve perfect rankings while assigning scores that are not comparable across queries, limiting their utility in downstream tasks or even in identifying the most plausible triples overall. Indeed, this issue becomes clear when test triples are ranked globally and evaluated with IR metrics, revealing that models with unnormalised scores often perform poorly due to inconsistent scoring across queries. To address this problem, we propose a new KGE model, called ART, which exploits probabilistic Auto-Regressive modelling and hence is normalised by design. Despite its conceptual simplicity, we show that ART outperforms prior art for discriminative and generative LP as well as other post-hoc calibration techniques.
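To illustrate why an auto-regressive factorisation is "normalised by design", the sketch below (a toy illustration, not the paper's ART architecture; all embeddings and scoring functions here are made up) factorises a triple probability as $p(h, r, t) = p(h)\,p(r \mid h)\,p(t \mid h, r)$, with each factor a softmax over its vocabulary. Summing over all triples then yields exactly 1, so scores are comparable across queries:

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 5, 3, 8

E = rng.normal(size=(n_entities, dim))   # toy entity embeddings
R = rng.normal(size=(n_relations, dim))  # toy relation embeddings

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def triple_prob(h, r, t):
    # Each factor is a softmax over its own vocabulary, so each sums to 1.
    p_h = softmax(E @ E.mean(axis=0))[h]   # p(h): toy unconditional head distribution
    p_r = softmax(R @ E[h])[r]             # p(r | h)
    p_t = softmax(E @ (E[h] + R[r]))[t]    # p(t | h, r)
    return p_h * p_r * p_t

# Normalised by design: the probabilities of all (h, r, t) triples sum to 1,
# since each conditional factor sums to 1 over its vocabulary.
total = sum(triple_prob(h, r, t)
            for h in range(n_entities)
            for r in range(n_relations)
            for t in range(n_entities))
print(round(total, 6))  # → 1.0
```

In contrast, a conventional KGE scoring function (e.g. a dot product) carries no such constraint, which is exactly the blind spot the abstract describes: per-query rankings can be perfect while absolute scores remain incomparable across queries.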

Cite this Paper


BibTeX
@InProceedings{pmlr-v284-brunink25a,
  title     = {The ART of Link Prediction with KGEs},
  author    = {Brunink, Yannick and Cochez, Michael and Urbani, Jacopo},
  booktitle = {Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning},
  pages     = {519--539},
  year      = {2025},
  editor    = {Gilpin, Leilani H. and Giunchiglia, Eleonora and Hitzler, Pascal and van Krieken, Emile},
  volume    = {284},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v284/main/assets/brunink25a/brunink25a.pdf},
  url       = {https://proceedings.mlr.press/v284/brunink25a.html},
  abstract  = {Link Prediction (LP) in Knowledge Graphs (KGs) is typically framed as ranking candidate entities for a query of the form $(entity, relation, ?)$, with models evaluated on their ability to rank the correct entities for each query. At the same time, Knowledge Graph Embedding (KGE) models used for this task produce unnormalised scores, making it unclear how to interpret their belief in the truthfulness of triples across different queries. Together, these two factors create a blind spot: models can achieve perfect rankings while assigning scores that are not comparable across queries, limiting their utility in downstream tasks or even in identifying the most plausible triples overall. Indeed, this issue becomes clear when test triples are ranked globally and evaluated with IR metrics, revealing that models with unnormalised scores often perform poorly due to inconsistent scoring across queries. To address this problem, we propose a new KGE model, called ART, which exploits probabilistic Auto-Regressive modelling and hence is normalised by design. Despite its conceptual simplicity, we show that ART outperforms prior art for discriminative and generative LP as well as other post-hoc calibration techniques.}
}
Endnote
%0 Conference Paper
%T The ART of Link Prediction with KGEs
%A Yannick Brunink
%A Michael Cochez
%A Jacopo Urbani
%B Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2025
%E Leilani H. Gilpin
%E Eleonora Giunchiglia
%E Pascal Hitzler
%E Emile van Krieken
%F pmlr-v284-brunink25a
%I PMLR
%P 519--539
%U https://proceedings.mlr.press/v284/brunink25a.html
%V 284
%X Link Prediction (LP) in Knowledge Graphs (KGs) is typically framed as ranking candidate entities for a query of the form $(entity, relation, ?)$, with models evaluated on their ability to rank the correct entities for each query. At the same time, Knowledge Graph Embedding (KGE) models used for this task produce unnormalised scores, making it unclear how to interpret their belief in the truthfulness of triples across different queries. Together, these two factors create a blind spot: models can achieve perfect rankings while assigning scores that are not comparable across queries, limiting their utility in downstream tasks or even in identifying the most plausible triples overall. Indeed, this issue becomes clear when test triples are ranked globally and evaluated with IR metrics, revealing that models with unnormalised scores often perform poorly due to inconsistent scoring across queries. To address this problem, we propose a new KGE model, called ART, which exploits probabilistic Auto-Regressive modelling and hence is normalised by design. Despite its conceptual simplicity, we show that ART outperforms prior art for discriminative and generative LP as well as other post-hoc calibration techniques.
APA
Brunink, Y., Cochez, M. & Urbani, J. (2025). The ART of Link Prediction with KGEs. Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, in Proceedings of Machine Learning Research 284:519-539. Available from https://proceedings.mlr.press/v284/brunink25a.html.