Stochastic Experience-Replay for Graph Continual Learning
Proceedings of the Third Learning on Graphs Conference, PMLR 269:32:1-32:16, 2025.
Abstract
Experience Replay (ER) methods in graph continual learning (GCL) mitigate catastrophic forgetting by storing and replaying historical tasks. However, these methods often struggle to store tasks efficiently in a compact memory buffer, which limits scalability. While recently proposed graph condensation techniques address this by summarizing historical graphs, they often fail to capture variations within the distribution of historical tasks. In this paper, we propose a novel framework, called *Stochastic Experience Replay for GCL (SERGCL)*, that incorporates a *stochastic memory buffer (SMB)* parameterizing a kernel function to estimate the distribution density of condensed graphs for each historical task. This allows efficient sampling of condensed graphs, leading to better coverage of historical tasks in the memory buffer and improved experience replay. Our experimental results on four benchmark datasets demonstrate that the proposed SERGCL framework achieves up to an 8.5% improvement in *average performance* over current state-of-the-art GCL models. Our code is available at: https://github.com/jayjaynandy/sergcl
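The abstract only outlines the SMB mechanism. As a rough illustration of kernel-density-based sampling over condensed graphs, here is a minimal sketch, not the authors' implementation: the class name `StochasticMemoryBuffer`, the Gaussian kernel, the `bandwidth` parameter, and the flattened node-feature representation of a condensed graph are all assumptions for the example.

```python
import torch

class StochasticMemoryBuffer:
    """Sketch of a per-task Gaussian kernel density estimate over
    condensed graphs, sampled during experience replay (illustrative only)."""

    def __init__(self, bandwidth: float = 0.1):
        self.bandwidth = bandwidth
        # task_id -> (k, n*d) tensor of k flattened condensed node-feature
        # matrices for that historical task (hypothetical representation).
        self.task_centers = {}

    def store(self, task_id: int, condensed_graphs: torch.Tensor) -> None:
        # Keep the condensed graphs as kernel centers; detach so stored
        # tensors are not tied to the condensation optimization graph.
        self.task_centers[task_id] = condensed_graphs.detach()

    def sample(self, task_id: int, num_samples: int = 1) -> torch.Tensor:
        # Sampling from a Gaussian KDE: pick a stored center uniformly,
        # then perturb it with isotropic noise scaled by the bandwidth.
        centers = self.task_centers[task_id]
        idx = torch.randint(len(centers), (num_samples,))
        chosen = centers[idx]
        return chosen + self.bandwidth * torch.randn_like(chosen)
```

Under these assumptions, a sampled vector would be reshaped back into a condensed graph's node-feature matrix and replayed alongside the current task's data; drawing from a mixture of Gaussians in this way yields varied condensed graphs per task rather than a single fixed summary.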