Improving Soft Unification with Knowledge Graph Embedding Methods

Xuanming Cui, Chionh Wei Peng, Adriel Kuek, Ser-Nam Lim
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:11543-11562, 2025.

Abstract

Neural Theorem Provers (NTPs) present a promising framework for neuro-symbolic reasoning, combining end-to-end differentiability with the interpretability of symbolic logic programming. However, optimizing NTPs remains a significant challenge due to their complex objective landscape and gradient sparsity. Knowledge Graph Embedding (KGE) methods, on the other hand, offer smooth optimization with well-defined learning objectives but often lack interpretability. In this work, we propose several strategies to integrate the strengths of NTPs and KGEs, and demonstrate substantial improvements in both accuracy and computational efficiency. Specifically, we show that by leveraging the structural learning of KGEs we can greatly improve NTPs’ poorly structured embedding space, and that by replacing NTP operations with efficient KGE operations we can reduce evaluation time by over 1000$\times$ on large-scale datasets such as WN18RR with only a mild accuracy trade-off.
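
For context, the sketch below (plain PyTorch, not the authors' code) contrasts the two scoring styles the abstract refers to: an NTP-style soft-unification score, an RBF kernel over symbol embeddings whose near-zero values for distant pairs illustrate the gradient sparsity noted above, and a TransE-style KGE score, which is smooth and trivially batched. The function names, kernel form, and tensor shapes are illustrative assumptions, not the paper's implementation.

    import torch

    # Hypothetical embeddings: 3 query symbols, 5 knowledge-base symbols, dim 16.
    queries = torch.randn(3, 16)
    kb = torch.randn(5, 16)

    def soft_unify(a, b, mu=1.0):
        # NTP-style soft unification: a Gaussian (RBF) kernel over pairwise
        # embedding distances. Scores decay exponentially with distance, so
        # most entries (and their gradients) are near zero -- one source of
        # the gradient sparsity mentioned in the abstract.
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * mu ** 2))

    def transe_score(h, r, t):
        # TransE-style KGE score: -||h + r - t||. Smooth almost everywhere
        # and cheap to evaluate over large candidate sets.
        return -torch.norm(h + r - t, dim=-1)

    print(soft_unify(queries, kb).shape)  # torch.Size([3, 5])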

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-cui25b,
  title     = {Improving Soft Unification with Knowledge Graph Embedding Methods},
  author    = {Cui, Xuanming and Peng, Chionh Wei and Kuek, Adriel and Lim, Ser-Nam},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {11543--11562},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/cui25b/cui25b.pdf},
  url       = {https://proceedings.mlr.press/v267/cui25b.html},
  abstract  = {Neural Theorem Provers (NTPs) present a promising framework for neuro-symbolic reasoning, combining end-to-end differentiability with the interpretability of symbolic logic programming. However, optimizing NTPs remains a significant challenge due to their complex objective landscape and gradient sparsity. Knowledge Graph Embedding (KGE) methods, on the other hand, offer smooth optimization with well-defined learning objectives but often lack interpretability. In this work, we propose several strategies to integrate the strengths of NTPs and KGEs, and demonstrate substantial improvements in both accuracy and computational efficiency. Specifically, we show that by leveraging the structural learning of KGEs we can greatly improve NTPs’ poorly structured embedding space, and that by replacing NTP operations with efficient KGE operations we can reduce evaluation time by over 1000$\times$ on large-scale datasets such as WN18RR with only a mild accuracy trade-off.}
}
Endnote
%0 Conference Paper
%T Improving Soft Unification with Knowledge Graph Embedding Methods
%A Xuanming Cui
%A Chionh Wei Peng
%A Adriel Kuek
%A Ser-Nam Lim
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-cui25b
%I PMLR
%P 11543--11562
%U https://proceedings.mlr.press/v267/cui25b.html
%V 267
%X Neural Theorem Provers (NTPs) present a promising framework for neuro-symbolic reasoning, combining end-to-end differentiability with the interpretability of symbolic logic programming. However, optimizing NTPs remains a significant challenge due to their complex objective landscape and gradient sparsity. Knowledge Graph Embedding (KGE) methods, on the other hand, offer smooth optimization with well-defined learning objectives but often lack interpretability. In this work, we propose several strategies to integrate the strengths of NTPs and KGEs, and demonstrate substantial improvements in both accuracy and computational efficiency. Specifically, we show that by leveraging the structural learning of KGEs we can greatly improve NTPs’ poorly structured embedding space, and that by replacing NTP operations with efficient KGE operations we can reduce evaluation time by over 1000$\times$ on large-scale datasets such as WN18RR with only a mild accuracy trade-off.
APA
Cui, X., Peng, C.W., Kuek, A. & Lim, S. (2025). Improving Soft Unification with Knowledge Graph Embedding Methods. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:11543-11562. Available from https://proceedings.mlr.press/v267/cui25b.html.
