Distilling KGE black boxes into interpretable NeSy models

Rodrigo Castellano Ontiveros, Francesco Giannini, Michelangelo Diligenti
Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, PMLR 284:736-749, 2025.

Abstract

Knowledge Graph Embedding (KGE) models have shown remarkable performance on the knowledge graph completion task, thanks to their ability to capture and represent complex relational patterns. Indeed, modern KGEs encompass different inductive biases, which can account for relational patterns such as compositional reasoning chains, symmetries, anti-symmetries, and hierarchical patterns. However, KGE models inherently lack interpretability, as their generalization capabilities rely purely on mapping human-interpretable units of information, such as constants and predicates, into vector embeddings in a dense latent space that is completely opaque to a human operator. On the other hand, various Neural-Symbolic (NeSy) methods have shown competitive results in knowledge completion tasks, but their focus on achieving high accuracy often leads to sacrificing interpretability. Many existing NeSy approaches, while inherently interpretable, resort to blending their predictions with opaque KGEs to boost performance, ultimately diminishing their explanatory power. This paper introduces a novel approach to address this limitation by applying a post-hoc NeSy method to KGE models. This strategy ensures both high fidelity to KGE models and the inherent interpretability of NeSy approaches. The proposed framework defines NeSy reasoners that generate explicit logic proofs using predefined or learned rules, ensuring transparent and explainable predictions. We evaluate the methodology using both accuracy and explainability-based metrics, demonstrating the effectiveness of our approach.
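
To make the distillation idea concrete, below is a minimal, self-contained sketch (not the paper's implementation): it pairs a DistMult-style KGE scorer with a single hand-written Horn rule and checks whether the transparent rule-based prediction, together with its explicit proof, agrees with the black-box KGE prediction on a queried triple. The toy graph, the rule, and all names (e.g. kge_score, rule_proof) are illustrative assumptions, not taken from the paper.

    # Minimal sketch: a DistMult-style KGE scorer plus a one-rule symbolic
    # reasoner whose prediction is compared against the KGE's, mimicking the
    # fidelity aspect of post-hoc distillation. Toy graph and names are
    # illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(0)
    entities = ["anna", "bob", "carla"]
    relations = ["parentOf", "grandparentOf"]
    E = {e: rng.normal(size=8) for e in entities}   # entity embeddings
    R = {r: rng.normal(size=8) for r in relations}  # relation embeddings

    def kge_score(h, r, t):
        """DistMult-style score: sum_i e_h[i] * w_r[i] * e_t[i]."""
        return float(np.sum(E[h] * R[r] * E[t]))

    # Observed facts (the known part of the knowledge graph).
    facts = {("anna", "parentOf", "bob"), ("bob", "parentOf", "carla")}

    def rule_proof(h, t):
        """Predefined Horn rule:
        grandparentOf(X,Z) <- parentOf(X,Y), parentOf(Y,Z).
        Returns the grounded rule body as an explicit proof, or None."""
        for y in entities:
            if (h, "parentOf", y) in facts and (y, "parentOf", t) in facts:
                return [(h, "parentOf", y), (y, "parentOf", t)]
        return None

    # Fidelity check: does the interpretable reasoner agree with the opaque KGE?
    query = ("anna", "grandparentOf", "carla")
    kge_pred = kge_score(*query) > 0.0        # thresholded black-box prediction
    proof = rule_proof(query[0], query[2])    # transparent symbolic prediction
    print("KGE prediction:", kge_pred)
    print("Rule-based prediction:", proof is not None, "proof:", proof)

In the paper's setting the rules may also be learned rather than predefined, and the reasoner is fit to reproduce the KGE's outputs; here the agreement is only measured, not optimized.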

Cite this Paper


BibTeX
@InProceedings{pmlr-v284-ontiveros25a,
  title     = {Distilling KGE black boxes into interpretable NeSy models},
  author    = {Ontiveros, Rodrigo Castellano and Giannini, Francesco and Diligenti, Michelangelo},
  booktitle = {Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning},
  pages     = {736--749},
  year      = {2025},
  editor    = {H. Gilpin, Leilani and Giunchiglia, Eleonora and Hitzler, Pascal and van Krieken, Emile},
  volume    = {284},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v284/main/assets/ontiveros25a/ontiveros25a.pdf},
  url       = {https://proceedings.mlr.press/v284/ontiveros25a.html}
}
Endnote
%0 Conference Paper
%T Distilling KGE black boxes into interpretable NeSy models
%A Rodrigo Castellano Ontiveros
%A Francesco Giannini
%A Michelangelo Diligenti
%B Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2025
%E Leilani H. Gilpin
%E Eleonora Giunchiglia
%E Pascal Hitzler
%E Emile van Krieken
%F pmlr-v284-ontiveros25a
%I PMLR
%P 736--749
%U https://proceedings.mlr.press/v284/ontiveros25a.html
%V 284
APA
Ontiveros, R.C., Giannini, F. & Diligenti, M. (2025). Distilling KGE black boxes into interpretable NeSy models. Proceedings of The 19th International Conference on Neurosymbolic Learning and Reasoning, in Proceedings of Machine Learning Research 284:736-749. Available from https://proceedings.mlr.press/v284/ontiveros25a.html.