Tomorrow Brings Greater Knowledge: Large Language Models Join Dynamic Temporal Knowledge Graphs

Christian Di Maio, Andrea Zugarini, Francesco Giannini, Marco Maggini, Stefano Melacci
Proceedings of The 3rd Conference on Lifelong Learning Agents, PMLR 274:560-576, 2025.

Abstract

Large Language Models (LLMs) have demonstrated exceptional capabilities in understanding and generating human-like text. In this paper, we leverage these skills in the context of lifelong learning agents. Instead of relying on fine-tuning procedures, we exploit Temporal Knowledge Graphs (TKGs) to continually store and update fresh information. In particular, we introduce a novel in-context learning approach called Continual In-context Knowledge LLM (CIK-LLM), which bridges an LLM with a dynamically changing TKG. The graph is updated whenever new knowledge becomes available, while the LLM is instructed to find the relational paths most relevant to the input instruction, with the goal of identifying small subgraphs of evidence. We propose to encode these subgraphs in a compressed, prompt-friendly manner, efficiently bridging the LLM and the TKG. The LLM then provides an answer conditioned on the knowledge in the graph, exploiting its own skills to support the reasoning process. We evaluate our approach on a TKG Question Answering benchmark that includes questions about events occurring at different times. The same questions are asked of models equipped with obsolete or incomplete information and of models including progressively more up-to-date knowledge. CIK-LLM outperforms pre-trained LLMs, immediately adapting to newly accumulated knowledge, and it reaches performance not far from that of a state-of-the-art model trained not only on top of LLMs but also on large datasets of questions and answers. Furthermore, our model represents a valuable "forgetting-free" approach to quickly adapting an LLM to novel domains without any fine-tuning, QA datasets, or incremental learning procedures.
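
The pipeline sketched in the abstract (keep a temporal knowledge graph fresh, retrieve the evidence most relevant to a question, serialize it into a compact prompt, and let the LLM answer conditioned on it) can be illustrated with a minimal Python sketch. This is not the authors' implementation: all names (TemporalFact, TemporalKG, retrieve_subgraph, encode_subgraph) and the token-overlap relevance scoring are hypothetical stand-ins for the LLM-guided path search described in the paper.

```python
# Hypothetical sketch of the abstract's pipeline; names and scoring are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalFact:
    subject: str
    relation: str
    obj: str
    timestamp: str  # e.g. "2024-03-01"

class TemporalKG:
    def __init__(self):
        self.facts: list[TemporalFact] = []

    def update(self, new_facts: list[TemporalFact]) -> None:
        """Continually ingest fresh facts as they become available."""
        self.facts.extend(new_facts)

    def retrieve_subgraph(self, question: str, k: int = 5) -> list[TemporalFact]:
        """Naive relevance scoring: keep the k facts sharing the most tokens
        with the question (a stand-in for the LLM-guided path search)."""
        q_tokens = set(question.lower().split())

        def score(f: TemporalFact) -> int:
            f_tokens = set(f"{f.subject} {f.relation} {f.obj}".lower().split())
            return len(q_tokens & f_tokens)

        return sorted(self.facts, key=score, reverse=True)[:k]

def encode_subgraph(facts: list[TemporalFact]) -> str:
    """Compress the evidence subgraph into a compact, prompt-friendly form."""
    return "\n".join(f"[{f.timestamp}] {f.subject} --{f.relation}--> {f.obj}"
                     for f in facts)

def answer(question: str, kg: TemporalKG, llm) -> str:
    """Condition the LLM's answer on the retrieved temporal evidence."""
    evidence = encode_subgraph(kg.retrieve_subgraph(question))
    prompt = (f"Temporal evidence:\n{evidence}\n\n"
              f"Question: {question}\nAnswer using only the evidence above.")
    return llm(prompt)  # `llm` is any text-in/text-out callable
```

Under these assumptions, adapting to new events amounts to calling update() with fresh facts: the graph, not the model weights, is what changes, so no fine-tuning, QA data, or incremental learning procedure is involved.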

Cite this Paper


BibTeX
@InProceedings{pmlr-v274-maio25a,
  title     = {Tomorrow Brings Greater Knowledge: Large Language Models Join Dynamic Temporal Knowledge Graphs},
  author    = {Maio, Christian Di and Zugarini, Andrea and Giannini, Francesco and Maggini, Marco and Melacci, Stefano},
  booktitle = {Proceedings of The 3rd Conference on Lifelong Learning Agents},
  pages     = {560--576},
  year      = {2025},
  editor    = {Lomonaco, Vincenzo and Melacci, Stefano and Tuytelaars, Tinne and Chandar, Sarath and Pascanu, Razvan},
  volume    = {274},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Jul--01 Aug},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v274/main/assets/maio25a/maio25a.pdf},
  url       = {https://proceedings.mlr.press/v274/maio25a.html}
}
Endnote
%0 Conference Paper
%T Tomorrow Brings Greater Knowledge: Large Language Models Join Dynamic Temporal Knowledge Graphs
%A Christian Di Maio
%A Andrea Zugarini
%A Francesco Giannini
%A Marco Maggini
%A Stefano Melacci
%B Proceedings of The 3rd Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2025
%E Vincenzo Lomonaco
%E Stefano Melacci
%E Tinne Tuytelaars
%E Sarath Chandar
%E Razvan Pascanu
%F pmlr-v274-maio25a
%I PMLR
%P 560--576
%U https://proceedings.mlr.press/v274/maio25a.html
%V 274
APA
Maio, C.D., Zugarini, A., Giannini, F., Maggini, M. & Melacci, S. (2025). Tomorrow Brings Greater Knowledge: Large Language Models Join Dynamic Temporal Knowledge Graphs. Proceedings of The 3rd Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 274:560-576. Available from https://proceedings.mlr.press/v274/maio25a.html.

Related Material