On the Power of Heuristics in Temporal Graphs

Filip Cornell, Oleg Smirnov, Gabriela Zarzar Gandler, Lele Cao
Proceedings on "I Can't Believe It's Not Better: Challenges in Applied Deep Learning" at ICLR 2025 Workshops, PMLR 296:37-46, 2025.

Abstract

Dynamic graph datasets often exhibit strong temporal patterns, such as recency, which prioritizes recent interactions, and popularity, which favors frequently occurring nodes. We demonstrate that simple heuristics leveraging only these patterns can perform on par with, or outperform, state-of-the-art neural network models under standard evaluation protocols. To further explore these dynamics, we introduce metrics that quantify the impact of recency and popularity across datasets. Our experiments on BenchTemp and the Temporal Graph Benchmark show that our approaches achieve state-of-the-art performance across all datasets in the latter and secure top ranks on multiple datasets in the former. These results emphasize the importance of refined evaluation schemes to enable fair comparisons and promote the development of more robust temporal graph models. Additionally, they reveal that current deep learning methods often struggle to capture the key patterns underlying predictions in real-world temporal graphs. For reproducibility, we have made our code publicly available.
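The two patterns named in the abstract can be illustrated with a minimal sketch. The Python snippet below is not the authors' implementation; the class and method names (TemporalHeuristics, recency_score, popularity_score) are illustrative assumptions. It simply replays a stream of timestamped interactions and scores candidate nodes by the time of their last interaction (recency) and by how often they have appeared (popularity):

    # Minimal sketch (not the paper's exact method) of recency- and
    # popularity-based scoring for dynamic link prediction candidates.
    from collections import defaultdict

    class TemporalHeuristics:
        """Track per-node interaction history; score candidates by recency
        (time of last interaction) and popularity (interaction count)."""

        def __init__(self):
            self.last_seen = defaultdict(lambda: float("-inf"))  # node -> last timestamp
            self.count = defaultdict(int)                        # node -> #interactions

        def update(self, src, dst, t):
            """Register an observed interaction (src, dst) at time t."""
            for node in (src, dst):
                self.last_seen[node] = max(self.last_seen[node], t)
                self.count[node] += 1

        def recency_score(self, node, t_query):
            """Higher for nodes seen more recently before t_query."""
            return -(t_query - self.last_seen[node])

        def popularity_score(self, node):
            """Higher for nodes that appear in more past interactions."""
            return self.count[node]

    # Usage: replay the observed stream, then rank candidate destinations.
    h = TemporalHeuristics()
    for src, dst, t in [(0, 1, 1.0), (0, 2, 2.0), (3, 2, 3.0)]:
        h.update(src, dst, t)

    candidates = [1, 2, 3]
    by_recency = sorted(candidates, key=lambda v: h.recency_score(v, t_query=4.0), reverse=True)
    by_popularity = sorted(candidates, key=h.popularity_score, reverse=True)
    print(by_recency, by_popularity)  # [2, 3, 1] [2, 1, 3]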

Cite this Paper


BibTeX
@InProceedings{pmlr-v296-cornell25a,
  title     = {On the Power of Heuristics in Temporal Graphs},
  author    = {Cornell, Filip and Smirnov, Oleg and Gandler, Gabriela Zarzar and Cao, Lele},
  booktitle = {Proceedings on "I Can't Believe It's Not Better: Challenges in Applied Deep Learning" at ICLR 2025 Workshops},
  pages     = {37--46},
  year      = {2025},
  editor    = {Blaas, Arno and D’Costa, Priya and Feng, Fan and Kriegler, Andreas and Mason, Ian and Pan, Zhaoying and Uelwer, Tobias and Williams, Jennifer and Xie, Yubin and Yang, Rui},
  volume    = {296},
  series    = {Proceedings of Machine Learning Research},
  month     = {28 Apr},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v296/main/assets/cornell25a/cornell25a.pdf},
  url       = {https://proceedings.mlr.press/v296/cornell25a.html},
  abstract  = {Dynamic graph datasets often exhibit strong temporal patterns, such as recency, which prioritizes recent interactions, and popularity, which favors frequently occurring nodes. We demonstrate that simple heuristics leveraging only these patterns can perform on par or outperform state-of-the-art neural network models under standard evaluation protocols. To further explore these dynamics, we introduce metrics that quantify the impact of recency and popularity across datasets. Our experiments on BenchTemp and the Temporal Graph Benchmark show that our approaches achieve state-of-the-art performance across all datasets in the latter and secure top ranks on multiple datasets in the former. These results emphasize the importance of refined evaluation schemes to enable fair comparisons and promote the development of more robust temporal graph models. Additionally, they reveal that current deep learning methods often struggle to capture the key patterns underlying predictions in real-world temporal graphs. For reproducibility, we have made our code publicly available.}
}
Endnote
%0 Conference Paper
%T On the Power of Heuristics in Temporal Graphs
%A Filip Cornell
%A Oleg Smirnov
%A Gabriela Zarzar Gandler
%A Lele Cao
%B Proceedings on "I Can't Believe It's Not Better: Challenges in Applied Deep Learning" at ICLR 2025 Workshops
%C Proceedings of Machine Learning Research
%D 2025
%E Arno Blaas
%E Priya D’Costa
%E Fan Feng
%E Andreas Kriegler
%E Ian Mason
%E Zhaoying Pan
%E Tobias Uelwer
%E Jennifer Williams
%E Yubin Xie
%E Rui Yang
%F pmlr-v296-cornell25a
%I PMLR
%P 37--46
%U https://proceedings.mlr.press/v296/cornell25a.html
%V 296
%X Dynamic graph datasets often exhibit strong temporal patterns, such as recency, which prioritizes recent interactions, and popularity, which favors frequently occurring nodes. We demonstrate that simple heuristics leveraging only these patterns can perform on par or outperform state-of-the-art neural network models under standard evaluation protocols. To further explore these dynamics, we introduce metrics that quantify the impact of recency and popularity across datasets. Our experiments on BenchTemp and the Temporal Graph Benchmark show that our approaches achieve state-of-the-art performance across all datasets in the latter and secure top ranks on multiple datasets in the former. These results emphasize the importance of refined evaluation schemes to enable fair comparisons and promote the development of more robust temporal graph models. Additionally, they reveal that current deep learning methods often struggle to capture the key patterns underlying predictions in real-world temporal graphs. For reproducibility, we have made our code publicly available.
APA
Cornell, F., Smirnov, O., Gandler, G.Z. & Cao, L. (2025). On the Power of Heuristics in Temporal Graphs. Proceedings on "I Can't Believe It's Not Better: Challenges in Applied Deep Learning" at ICLR 2025 Workshops, in Proceedings of Machine Learning Research 296:37-46. Available from https://proceedings.mlr.press/v296/cornell25a.html.