Stability and Generalization Capability of Subgraph Reasoning Models for Inductive Knowledge Graph Completion

Minsung Hwang, Jaejun Lee, Joyce Jiyoung Whang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:26376-26411, 2025.

Abstract

Inductive knowledge graph completion aims to predict missing triplets in an incomplete knowledge graph that differs from the one observed during training. While subgraph reasoning models have demonstrated empirical success in this task, their theoretical properties, such as stability and generalization capability, remain unexplored. In this work, we present the first theoretical analysis of the relationship between the stability and the generalization capability for subgraph reasoning models. Specifically, we define stability as the degree of consistency in a subgraph reasoning model’s outputs in response to differences in input subgraphs and introduce the Relational Tree Mover’s Distance as a metric to quantify the differences between the subgraphs. We then show that the generalization capability of subgraph reasoning models, defined as the discrepancy between the performance on training data and test data, is proportional to their stability. Furthermore, we empirically analyze the impact of stability on generalization capability using real-world datasets, validating our theoretical findings.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-hwang25a,
  title = {Stability and Generalization Capability of Subgraph Reasoning Models for Inductive Knowledge Graph Completion},
  author = {Hwang, Minsung and Lee, Jaejun and Whang, Joyce Jiyoung},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {26376--26411},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/hwang25a/hwang25a.pdf},
  url = {https://proceedings.mlr.press/v267/hwang25a.html},
  abstract = {Inductive knowledge graph completion aims to predict missing triplets in an incomplete knowledge graph that differs from the one observed during training. While subgraph reasoning models have demonstrated empirical success in this task, their theoretical properties, such as stability and generalization capability, remain unexplored. In this work, we present the first theoretical analysis of the relationship between the stability and the generalization capability for subgraph reasoning models. Specifically, we define stability as the degree of consistency in a subgraph reasoning model’s outputs in response to differences in input subgraphs and introduce the Relational Tree Mover’s Distance as a metric to quantify the differences between the subgraphs. We then show that the generalization capability of subgraph reasoning models, defined as the discrepancy between the performance on training data and test data, is proportional to their stability. Furthermore, we empirically analyze the impact of stability on generalization capability using real-world datasets, validating our theoretical findings.}
}
Endnote
%0 Conference Paper
%T Stability and Generalization Capability of Subgraph Reasoning Models for Inductive Knowledge Graph Completion
%A Minsung Hwang
%A Jaejun Lee
%A Joyce Jiyoung Whang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-hwang25a
%I PMLR
%P 26376--26411
%U https://proceedings.mlr.press/v267/hwang25a.html
%V 267
%X Inductive knowledge graph completion aims to predict missing triplets in an incomplete knowledge graph that differs from the one observed during training. While subgraph reasoning models have demonstrated empirical success in this task, their theoretical properties, such as stability and generalization capability, remain unexplored. In this work, we present the first theoretical analysis of the relationship between the stability and the generalization capability for subgraph reasoning models. Specifically, we define stability as the degree of consistency in a subgraph reasoning model’s outputs in response to differences in input subgraphs and introduce the Relational Tree Mover’s Distance as a metric to quantify the differences between the subgraphs. We then show that the generalization capability of subgraph reasoning models, defined as the discrepancy between the performance on training data and test data, is proportional to their stability. Furthermore, we empirically analyze the impact of stability on generalization capability using real-world datasets, validating our theoretical findings.
APA
Hwang, M., Lee, J. & Whang, J.J. (2025). Stability and Generalization Capability of Subgraph Reasoning Models for Inductive Knowledge Graph Completion. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:26376-26411. Available from https://proceedings.mlr.press/v267/hwang25a.html.