Scalable and Efficient Temporal Graph Representation Learning via Forward Recent Sampling
Proceedings of the Third Learning on Graphs Conference, PMLR 269:39:1-39:20, 2025.
Abstract
Temporal graph representation learning (TGRL) is essential for modeling dynamic systems in real-world networks. However, traditional TGRL methods, despite their effectiveness, often face significant computational challenges and inference delays due to the inefficient sampling of temporal neighbors. Conventional sampling methods typically involve backtracking through the interaction history of each node. In this paper, we propose a novel TGRL framework, No-Looking-Back (NLB), which overcomes these challenges by introducing a forward recent sampling strategy. This strategy eliminates the need to backtrack through historical interactions by maintaining a GPU-executable, size-constrained hash table for each node. The hash table records a down-sampled set of recent interactions, enabling rapid query responses with minimal inference latency. The maintenance of this hash table is highly efficient, operating with O(1) complexity. Fully compatible with GPU processing, NLB maximizes programmability, parallelism, and power efficiency. Empirical evaluations demonstrate that NLB not only matches or surpasses state-of-the-art methods in accuracy for tasks like link prediction and node classification across six real-world datasets, but also achieves 1.32-4.40× faster training, 1.2-7.94× greater energy efficiency, and 1.63-12.95× lower inference latency compared to competitive baselines.
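To make the forward recent sampling idea concrete, the sketch below shows one plausible CPU/NumPy realization of a per-node, size-constrained hash table with O(1) forward updates. The class name ForwardRecentSampler, the slot-hashing overwrite policy, and all parameters are illustrative assumptions for exposition only, not the paper's actual (GPU-executable) implementation.

```python
import numpy as np

class ForwardRecentSampler:
    """Minimal sketch (assumed details): each node keeps a fixed-size table of
    recent interactions. When an interaction (u, v, t) arrives, it is pushed
    forward into the table of node u in O(1); the overwrite on hash collisions
    acts as a recency-biased down-sampling, so queries never need to backtrack
    through the full interaction history."""

    def __init__(self, num_nodes, table_size):
        self.table_size = table_size
        # Neighbor id and timestamp per slot; -1 marks an empty slot.
        self.nbr = np.full((num_nodes, table_size), -1, dtype=np.int64)
        self.ts = np.zeros((num_nodes, table_size), dtype=np.float64)

    def insert(self, u, v, t):
        # O(1) forward update: hash the interaction to a slot and overwrite.
        slot = hash((v, int(t))) % self.table_size  # assumed hashing scheme
        self.nbr[u, slot] = v
        self.ts[u, slot] = t

    def sample(self, u):
        # Query: return the currently stored (down-sampled) recent neighbors.
        mask = self.nbr[u] >= 0
        return self.nbr[u][mask], self.ts[u][mask]


# Hypothetical usage: stream interactions forward, then query node 0.
sampler = ForwardRecentSampler(num_nodes=5, table_size=4)
for u, v, t in [(0, 1, 10.0), (1, 0, 10.0), (0, 2, 11.0), (0, 3, 12.0)]:
    sampler.insert(u, v, t)
neighbors, timestamps = sampler.sample(0)
```

In this sketch the per-interaction cost is constant regardless of a node's degree or history length, which is the property the abstract attributes to NLB; a GPU version would batch these updates and queries across nodes in parallel.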