Knowledge Graph Large Language Model (KG-LLM) for Link Prediction

Dong Shu, Tianle Chen, Mingyu Jin, Chong Zhang, Mengnan Du, Yongfeng Zhang
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:143-158, 2025.

Abstract

The task of multi-hop link prediction within knowledge graphs (KGs) stands as a challenge in the field of knowledge graph analysis, as it requires the model to reason through and understand all intermediate connections before making a prediction. In this paper, we introduce the Knowledge Graph Large Language Model (KG-LLM), a novel framework that leverages large language models (LLMs) for knowledge graph tasks. We first convert structured knowledge graph data into natural language and then use these natural language prompts to fine-tune LLMs to enhance multi-hop link prediction in KGs. By converting the KG to natural language prompts, our framework is designed to learn the latent representations of entities and their interrelations. To show the efficacy of the KG-LLM Framework, we fine-tune three leading LLMs within this framework, including Flan-T5, LLaMa2 and Gemma. Further, we explore the framework’s potential to provide LLMs with zero-shot capabilities for handling previously unseen prompts. Experimental results show that KG-LLM significantly improves the models’ generalization capabilities, leading to more accurate predictions in unfamiliar scenarios.
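As an illustration of the framework's first step, turning KG structure into natural-language prompts before fine-tuning, the sketch below converts a hypothetical two-hop path of triples into a yes/no question. The template wording, the entity and relation names, and the path_to_prompt helper are assumptions for illustration only, not the prompt format used in the paper.

    # Illustrative sketch only: the prompt template and entities below are
    # assumptions, not the exact format used by the KG-LLM framework.
    from typing import List, Tuple

    Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

    def path_to_prompt(path: List[Triple], head: str, tail: str) -> str:
        """Turn a multi-hop KG path into a natural-language link-prediction prompt."""
        # Verbalize each triple as a short factual sentence.
        facts = [f"{h} {r.replace('_', ' ')} {t}." for h, r, t in path]
        # Ask whether a link exists between the path's endpoints.
        question = (f"Given these facts, is there a relation between "
                    f"{head} and {tail}? Answer yes or no.")
        return " ".join(facts) + " " + question

    # Hypothetical two-hop path used for illustration.
    path = [("Alice", "works_for", "AcmeCorp"),
            ("AcmeCorp", "located_in", "Berlin")]
    print(path_to_prompt(path, "Alice", "Berlin"))
    # -> "Alice works for AcmeCorp. AcmeCorp located in Berlin. Given these
    #     facts, is there a relation between Alice and Berlin? Answer yes or no."

Prompts of this kind, paired with yes/no labels, would then serve as the instruction-tuning data for models such as Flan-T5, LLaMa2, and Gemma.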

Cite this Paper


BibTeX
@InProceedings{pmlr-v260-shu25a,
  title     = {Knowledge Graph Large Language Model (KG-LLM) for Link Prediction},
  author    = {Shu, Dong and Chen, Tianle and Jin, Mingyu and Zhang, Chong and Du, Mengnan and Zhang, Yongfeng},
  booktitle = {Proceedings of the 16th Asian Conference on Machine Learning},
  pages     = {143--158},
  year      = {2025},
  editor    = {Nguyen, Vu and Lin, Hsuan-Tien},
  volume    = {260},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--08 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v260/main/assets/shu25a/shu25a.pdf},
  url       = {https://proceedings.mlr.press/v260/shu25a.html},
  abstract  = {The task of multi-hop link prediction within knowledge graphs (KGs) stands as a challenge in the field of knowledge graph analysis, as it requires the model to reason through and understand all intermediate connections before making a prediction. In this paper, we introduce the Knowledge Graph Large Language Model (KG-LLM), a novel framework that leverages large language models (LLMs) for knowledge graph tasks. We first convert structured knowledge graph data into natural language and then use these natural language prompts to fine-tune LLMs to enhance multi-hop link prediction in KGs. By converting the KG to natural language prompts, our framework is designed to learn the latent representations of entities and their interrelations. To show the efficacy of the KG-LLM Framework, we fine-tune three leading LLMs within this framework, including Flan-T5, LLaMa2 and Gemma. Further, we explore the framework’s potential to provide LLMs with zero-shot capabilities for handling previously unseen prompts. Experimental results show that KG-LLM significantly improves the models’ generalization capabilities, leading to more accurate predictions in unfamiliar scenarios.}
}
Endnote
%0 Conference Paper
%T Knowledge Graph Large Language Model (KG-LLM) for Link Prediction
%A Dong Shu
%A Tianle Chen
%A Mingyu Jin
%A Chong Zhang
%A Mengnan Du
%A Yongfeng Zhang
%B Proceedings of the 16th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Vu Nguyen
%E Hsuan-Tien Lin
%F pmlr-v260-shu25a
%I PMLR
%P 143--158
%U https://proceedings.mlr.press/v260/shu25a.html
%V 260
%X The task of multi-hop link prediction within knowledge graphs (KGs) stands as a challenge in the field of knowledge graph analysis, as it requires the model to reason through and understand all intermediate connections before making a prediction. In this paper, we introduce the Knowledge Graph Large Language Model (KG-LLM), a novel framework that leverages large language models (LLMs) for knowledge graph tasks. We first convert structured knowledge graph data into natural language and then use these natural language prompts to fine-tune LLMs to enhance multi-hop link prediction in KGs. By converting the KG to natural language prompts, our framework is designed to learn the latent representations of entities and their interrelations. To show the efficacy of the KG-LLM Framework, we fine-tune three leading LLMs within this framework, including Flan-T5, LLaMa2 and Gemma. Further, we explore the framework’s potential to provide LLMs with zero-shot capabilities for handling previously unseen prompts. Experimental results show that KG-LLM significantly improves the models’ generalization capabilities, leading to more accurate predictions in unfamiliar scenarios.
APA
Shu, D., Chen, T., Jin, M., Zhang, C., Du, M. & Zhang, Y. (2025). Knowledge Graph Large Language Model (KG-LLM) for Link Prediction. Proceedings of the 16th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 260:143-158. Available from https://proceedings.mlr.press/v260/shu25a.html.