Scalable and Efficient Temporal Graph Representation Learning via Forward Recent Sampling

Yuhong Luo, Pan Li
Proceedings of the Third Learning on Graphs Conference, PMLR 269:39:1-39:20, 2025.

Abstract

Temporal graph representation learning (TGRL) is essential for modeling dynamic systems in real-world networks. However, traditional TGRL methods, despite their effectiveness, often face significant computational challenges and inference delays due to the inefficient sampling of temporal neighbors. Conventional sampling methods typically involve backtracking through the interaction history of each node. In this paper, we propose a novel TGRL framework, No-Looking-Back (NLB), which overcomes these challenges by introducing a forward recent sampling strategy. This strategy eliminates the need to backtrack through historical interactions by utilizing a GPU-executable, size-constrained hash table for each node. The hash table records a down-sampled set of recent interactions, enabling rapid query responses with minimal inference latency. The maintenance of this hash table is highly efficient, operating with O(1) complexity. Fully compatible with GPU processing, NLB maximizes programmability, parallelism, and power efficiency. Empirical evaluations demonstrate that NLB not only matches or surpasses state-of-the-art methods in accuracy for tasks like link prediction and node classification across six real-world datasets but also achieves 1.32-4.40× faster training, 1.2-7.94× greater energy efficiency, and 1.63-12.95× lower inference latency compared to competitive baselines.
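To make the mechanism concrete, the following is a minimal Python sketch of the idea the abstract describes: each node keeps a fixed-size hash table of recent interactions that is updated forward in time, so maintenance is O(1) per interaction and queries never backtrack through the full history. This is an illustration based only on the abstract, not the paper's actual algorithm; the class and parameter names (ForwardRecentSampler, table_size) and the hash-and-overwrite replacement rule are assumptions, and NumPy stands in for the GPU tensor operations NLB actually uses.

```python
import numpy as np

class ForwardRecentSampler:
    """Minimal sketch of forward recent sampling (details assumed, not the
    paper's exact algorithm). Each node keeps a fixed-size table of
    (neighbor, timestamp) entries; new interactions are hashed into one slot
    and may overwrite an older entry, biasing the table toward recent
    interactions (a form of down-sampling)."""

    def __init__(self, num_nodes: int, table_size: int):
        self.table_size = table_size
        # -1 marks an empty slot; one fixed-size row per node
        self.neighbors = np.full((num_nodes, table_size), -1, dtype=np.int64)
        self.timestamps = np.zeros((num_nodes, table_size), dtype=np.float64)

    def insert(self, u: int, v: int, t: float) -> None:
        """O(1) forward update: hash the new interaction into one slot of
        u's table, overwriting any previous occupant of that slot."""
        slot = hash((v, t)) % self.table_size
        self.neighbors[u, slot] = v
        self.timestamps[u, slot] = t

    def query(self, u: int):
        """Return u's down-sampled recent neighbors with no backtracking:
        a single fixed-size read, independent of u's full history length."""
        mask = self.neighbors[u] >= 0
        return self.neighbors[u][mask], self.timestamps[u][mask]

# Usage: stream interactions forward in time, then query without backtracking.
sampler = ForwardRecentSampler(num_nodes=100, table_size=4)
for u, v, t in [(0, 5, 1.0), (0, 9, 2.0), (5, 0, 2.0), (0, 7, 3.5)]:
    sampler.insert(u, v, t)  # for undirected edges, also insert(v, u, t)
print(sampler.query(0))
```

Because each update touches a single slot and each query reads one fixed-size row, both operations are constant-time and map naturally onto batched tensor operations, which appears to be the property the abstract credits for NLB's training-speed, energy, and latency gains.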

Cite this Paper


BibTeX
@InProceedings{pmlr-v269-luo25a,
  title     = {Scalable and Efficient Temporal Graph Representation Learning via Forward Recent Sampling},
  author    = {Luo, Yuhong and Li, Pan},
  booktitle = {Proceedings of the Third Learning on Graphs Conference},
  pages     = {39:1--39:20},
  year      = {2025},
  editor    = {Wolf, Guy and Krishnaswamy, Smita},
  volume    = {269},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--29 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v269/main/assets/luo25a/luo25a.pdf},
  url       = {https://proceedings.mlr.press/v269/luo25a.html},
  abstract  = {Temporal graph representation learning (TGRL) is essential for modeling dynamic systems in real-world networks. However, traditional TGRL methods, despite their effectiveness, often face significant computational challenges and inference delays due to the inefficient sampling of temporal neighbors. Conventional sampling methods typically involve backtracking through the interaction history of each node. In this paper, we propose a novel TGRL framework, No-Looking-Back (NLB), which overcomes these challenges by introducing a forward recent sampling strategy. This strategy eliminates the need to backtrack through historical interactions by utilizing a GPU-executable, size-constrained hash table for each node. The hash table records a down-sampled set of recent interactions, enabling rapid query responses with minimal inference latency. The maintenance of this hash table is highly efficient, operating with $O(1)$ complexity. Fully compatible with GPU processing, NLB maximizes programmability, parallelism, and power efficiency. Empirical evaluations demonstrate that NLB not only matches or surpasses state-of-the-art methods in accuracy for tasks like link prediction and node classification across six real-world datasets but also achieves 1.32-4.40$\times$ faster training, 1.2-7.94$\times$ greater energy efficiency, and 1.63-12.95$\times$ lower inference latency compared to competitive baselines.}
}
Endnote
%0 Conference Paper
%T Scalable and Efficient Temporal Graph Representation Learning via Forward Recent Sampling
%A Yuhong Luo
%A Pan Li
%B Proceedings of the Third Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Guy Wolf
%E Smita Krishnaswamy
%F pmlr-v269-luo25a
%I PMLR
%P 39:1--39:20
%U https://proceedings.mlr.press/v269/luo25a.html
%V 269
%X Temporal graph representation learning (TGRL) is essential for modeling dynamic systems in real-world networks. However, traditional TGRL methods, despite their effectiveness, often face significant computational challenges and inference delays due to the inefficient sampling of temporal neighbors. Conventional sampling methods typically involve backtracking through the interaction history of each node. In this paper, we propose a novel TGRL framework, No-Looking-Back (NLB), which overcomes these challenges by introducing a forward recent sampling strategy. This strategy eliminates the need to backtrack through historical interactions by utilizing a GPU-executable, size-constrained hash table for each node. The hash table records a down-sampled set of recent interactions, enabling rapid query responses with minimal inference latency. The maintenance of this hash table is highly efficient, operating with $O(1)$ complexity. Fully compatible with GPU processing, NLB maximizes programmability, parallelism, and power efficiency. Empirical evaluations demonstrate that NLB not only matches or surpasses state-of-the-art methods in accuracy for tasks like link prediction and node classification across six real-world datasets but also achieves 1.32-4.40$\times$ faster training, 1.2-7.94$\times$ greater energy efficiency, and 1.63-12.95$\times$ lower inference latency compared to competitive baselines.
APA
Luo, Y. & Li, P. (2025). Scalable and Efficient Temporal Graph Representation Learning via Forward Recent Sampling. Proceedings of the Third Learning on Graphs Conference, in Proceedings of Machine Learning Research 269:39:1-39:20. Available from https://proceedings.mlr.press/v269/luo25a.html.