Stochastic Experience-Replay for Graph Continual Learning

Arnab Kumar Mondal, Jay Nandy, Manohar Kaul, Mahesh Chandran
Proceedings of the Third Learning on Graphs Conference, PMLR 269:32:1-32:16, 2025.

Abstract

Experience Replay (ER) methods in graph continual learning (GCL) mitigate catastrophic forgetting by storing and replaying historical tasks. However, these methods often struggle to store tasks efficiently in a compact memory buffer, limiting scalability. While recently proposed graph condensation techniques address this by summarizing historical graphs, they often inadequately capture variations within the distribution of historical tasks. In this paper, we propose a novel framework, called *Stochastic Experience Replay for GCL (SERGCL)*, which incorporates a *stochastic memory buffer (SMB)* that parameterizes a kernel function to estimate the distribution density of condensed graphs for each historical task. This allows efficient sampling of condensed graphs, leading to better coverage of historical tasks in the memory buffer and improved experience replay. Our experimental results on four benchmark datasets demonstrate that our proposed SERGCL framework achieves up to an 8.5% improvement in *average performance* compared to the current state-of-the-art GCL models. Our code is available at: https://github.com/jayjaynandy/sergcl
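The stochastic memory buffer described in the abstract can be loosely pictured as kernel density estimation over condensed-graph representations, with replay samples drawn from the estimated density. The sketch below is an illustrative assumption, not the authors' implementation: the class name, the isotropic Gaussian kernel, and the flat feature vectors standing in for condensed graphs are all hypothetical.

```python
import numpy as np

class StochasticMemoryBuffer:
    """Hypothetical sketch of an SMB: a per-task Gaussian kernel density
    estimate over flattened condensed-graph feature vectors.
    Not the paper's implementation."""

    def __init__(self, bandwidth=0.1):
        self.bandwidth = bandwidth          # kernel scale (assumed fixed here)
        self.tasks = {}                     # task_id -> (n_graphs, d) array

    def store(self, task_id, condensed_features):
        """Store condensed-graph feature vectors for one historical task."""
        self.tasks[task_id] = np.asarray(condensed_features, dtype=float)

    def sample(self, task_id, n_samples, seed=None):
        """Draw replay samples from the KDE: pick a stored vector uniformly,
        then perturb it with isotropic Gaussian noise of scale `bandwidth`."""
        rng = np.random.default_rng(seed)
        feats = self.tasks[task_id]
        idx = rng.integers(0, len(feats), size=n_samples)
        noise = rng.normal(scale=self.bandwidth,
                           size=(n_samples, feats.shape[1]))
        return feats[idx] + noise
```

Sampling from the kernel mixture rather than replaying a single fixed summary is what (in this reading) gives broader coverage of each historical task's distribution.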

Cite this Paper


BibTeX
@InProceedings{pmlr-v269-mondal25a,
  title     = {Stochastic Experience-Replay for Graph Continual Learning},
  author    = {Mondal, Arnab Kumar and Nandy, Jay and Kaul, Manohar and Chandran, Mahesh},
  booktitle = {Proceedings of the Third Learning on Graphs Conference},
  pages     = {32:1--32:16},
  year      = {2025},
  editor    = {Wolf, Guy and Krishnaswamy, Smita},
  volume    = {269},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--29 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v269/main/assets/mondal25a/mondal25a.pdf},
  url       = {https://proceedings.mlr.press/v269/mondal25a.html},
  abstract  = {Experience Replay (ER) methods in graph continual learning (GCL) mitigate catastrophic forgetting by storing and replaying historical tasks. However, these methods often struggle to store tasks efficiently in a compact memory buffer, limiting scalability. While recently proposed graph condensation techniques address this by summarizing historical graphs, they often inadequately capture variations within the distribution of historical tasks. In this paper, we propose a novel framework, called \emph{Stochastic Experience Replay for GCL (SERGCL)}, which incorporates a \emph{stochastic memory buffer (SMB)} that parameterizes a kernel function to estimate the distribution density of condensed graphs for each historical task. This allows efficient sampling of condensed graphs, leading to better coverage of historical tasks in the memory buffer and improved experience replay. Our experimental results on four benchmark datasets demonstrate that our proposed SERGCL framework achieves up to an 8.5\% improvement in \emph{average performance} compared to the current state-of-the-art GCL models. Our code is available at: \href{https://github.com/jayjaynandy/sergcl}{https://github.com/jayjaynandy/sergcl}}
}
Endnote
%0 Conference Paper
%T Stochastic Experience-Replay for Graph Continual Learning
%A Arnab Kumar Mondal
%A Jay Nandy
%A Manohar Kaul
%A Mahesh Chandran
%B Proceedings of the Third Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Guy Wolf
%E Smita Krishnaswamy
%F pmlr-v269-mondal25a
%I PMLR
%P 32:1--32:16
%U https://proceedings.mlr.press/v269/mondal25a.html
%V 269
%X Experience Replay (ER) methods in graph continual learning (GCL) mitigate catastrophic forgetting by storing and replaying historical tasks. However, these methods often struggle to store tasks efficiently in a compact memory buffer, limiting scalability. While recently proposed graph condensation techniques address this by summarizing historical graphs, they often inadequately capture variations within the distribution of historical tasks. In this paper, we propose a novel framework, called Stochastic Experience Replay for GCL (SERGCL), which incorporates a stochastic memory buffer (SMB) that parameterizes a kernel function to estimate the distribution density of condensed graphs for each historical task. This allows efficient sampling of condensed graphs, leading to better coverage of historical tasks in the memory buffer and improved experience replay. Our experimental results on four benchmark datasets demonstrate that our proposed SERGCL framework achieves up to an 8.5% improvement in average performance compared to the current state-of-the-art GCL models. Our code is available at: https://github.com/jayjaynandy/sergcl
APA
Mondal, A.K., Nandy, J., Kaul, M. & Chandran, M. (2025). Stochastic Experience-Replay for Graph Continual Learning. Proceedings of the Third Learning on Graphs Conference, in Proceedings of Machine Learning Research 269:32:1-32:16. Available from https://proceedings.mlr.press/v269/mondal25a.html.

Related Material