MemFreezing: A Novel Adversarial Attack on Temporal Graph Neural Networks under Limited Future Knowledge

Yue Dai, Liang Liu, Xulong Tang, Youtao Zhang, Jun Yang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:12121-12160, 2025.

Abstract

Temporal graph neural networks (TGNNs) have gained significant momentum in many real-world dynamic graph tasks. While most existing TGNN attack methods assume worst-case scenarios in which attackers have complete knowledge of the input graph, this assumption may not hold in real-world situations, where attackers can, at best, access information about existing nodes and edges but not future ones that appear after the attack. Studying adversarial attacks under these constraints is nonetheless crucial, as limited future knowledge can reveal TGNN vulnerabilities overlooked in idealized settings. Designing effective attacks in such scenarios is challenging, however: the evolving graph can weaken their impact and make it hard to affect unseen nodes. To address these challenges, we introduce MemFreezing, a novel adversarial attack framework that delivers long-lasting, spreading disruptions in TGNNs without requiring post-attack knowledge of the graph. MemFreezing strategically injects fake nodes or edges to push node memories into a stable “frozen state,” reducing their responsiveness to subsequent graph changes and limiting their ability to convey meaningful information. As the graph evolves, the affected nodes maintain their frozen state and propagate it to their neighbors. Experimental results show that MemFreezing persistently degrades TGNN performance across various tasks, offering a more enduring adversarial strategy under limited future knowledge.
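In memory-based TGNNs such as TGN, each node carries a memory vector that a recurrent cell updates on every interaction. The sketch below is illustrative only, not the paper's method: the toy GRU cell and the constant frozen_msg are stand-ins for the crafted perturbations the abstract describes. It shows the fixed-point behavior the attack exploits: repeatedly delivering the same message drives a node's memory toward a state m* with GRU(msg*, m*) ≈ m*, after which updates barely move it. Per the abstract, MemFreezing injects fake nodes or edges to push memories into such a frozen state.

import torch
import torch.nn as nn

# Toy memory module in the style of memory-based TGNNs (e.g., TGN):
# each node keeps a memory vector that a GRU cell updates whenever an
# event (edge) involving that node arrives.
MEM_DIM = MSG_DIM = 8
torch.manual_seed(0)
cell = nn.GRUCell(input_size=MSG_DIM, hidden_size=MEM_DIM)

def update_memory(memory, message):
    # One update step: m_new = GRU(message, m_old).
    return cell(message.unsqueeze(0), memory.unsqueeze(0)).squeeze(0)

memory = torch.zeros(MEM_DIM)
frozen_msg = torch.ones(MSG_DIM)  # hypothetical stand-in for a crafted adversarial message

# Repeatedly delivering the same message drives the memory toward a
# fixed point of the update map; successive updates then change the
# memory less and less, i.e., it stops responding to new events.
with torch.no_grad():
    prev = memory.clone()
    for step in range(1, 101):
        memory = update_memory(memory, frozen_msg)
        if step % 25 == 0:
            print(f"step {step:3d}: ||m_t - m_(t-1)|| = {(memory - prev).norm().item():.2e}")
        prev = memory.clone()

The printed differences shrink toward zero, which is the sense in which a memory is "frozen": once pinned near a fixed point, it conveys little information about subsequent graph changes.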

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-dai25j,
  title     = {{M}em{F}reezing: A Novel Adversarial Attack on Temporal Graph Neural Networks under Limited Future Knowledge},
  author    = {Dai, Yue and Liu, Liang and Tang, Xulong and Zhang, Youtao and Yang, Jun},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {12121--12160},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/dai25j/dai25j.pdf},
  url       = {https://proceedings.mlr.press/v267/dai25j.html}
}
Endnote
%0 Conference Paper
%T MemFreezing: A Novel Adversarial Attack on Temporal Graph Neural Networks under Limited Future Knowledge
%A Yue Dai
%A Liang Liu
%A Xulong Tang
%A Youtao Zhang
%A Jun Yang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-dai25j
%I PMLR
%P 12121--12160
%U https://proceedings.mlr.press/v267/dai25j.html
%V 267
APA
Dai, Y., Liu, L., Tang, X., Zhang, Y. & Yang, J. (2025). MemFreezing: A Novel Adversarial Attack on Temporal Graph Neural Networks under Limited Future Knowledge. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:12121-12160. Available from https://proceedings.mlr.press/v267/dai25j.html.
