Adapting Precomputed Features for Efficient Graph Condensation

Yuan Li, Jun Hu, Zemin Liu, Bryan Hooi, Jia Chen, Bingsheng He
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:34604-34617, 2025.

Abstract

Graph Neural Networks (GNNs) face significant computational challenges when handling large-scale graphs. To address this, Graph Condensation (GC) methods aim to compress large graphs into smaller, synthetic ones that are more manageable for GNN training. Recently, trajectory matching methods have shown state-of-the-art (SOTA) performance for GC, aligning the model’s training behavior on a condensed graph with that on the original graph by guiding the trajectory of model parameters. However, these approaches require repetitive GNN retraining during condensation, making them computationally expensive. To address the efficiency issue, we completely bypass trajectory matching and propose a novel two-stage framework. The first stage, a precomputation stage, performs one-time message passing to extract structural and semantic information from the original graph. The second stage, a diversity-aware adaptation stage, performs class-wise alignment while maximizing the diversity of synthetic features. Remarkably, even with just the precomputation stage, which takes only seconds, our method either matches or surpasses 5 out of 9 baseline results. Extensive experiments show that our approach achieves comparable or better performance while being 96$\times$ to 2,455$\times$ faster than SOTA methods, making it more practical for large-scale GNN applications. Our code and data are available at https://github.com/Xtra-Computing/GCPA.
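To make the two-stage idea concrete, below is a minimal sketch of what such a framework could look like; it is an illustrative assumption, not the authors' GCPA implementation. Stage 1 is approximated here by SGC-style feature propagation (repeated multiplication with a normalized adjacency), and Stage 2 by a hypothetical loss that aligns class-wise means while penalizing collapse of the synthetic features. All names, hyperparameters, and loss terms are illustrative.

import torch

def precompute_features(adj, feats, num_hops=2):
    """Stage 1 (sketch): one-time message passing.

    adj   : symmetrically normalized adjacency matrix (n x n, dense for simplicity)
    feats : node feature matrix (n x d)
    Each hop multiplies by the adjacency, folding neighborhood structure into the
    node features so that no GNN retraining is needed during condensation.
    """
    out = feats
    for _ in range(num_hops):
        out = adj @ out  # one round of propagation per hop
    return out

def adaptation_loss(synth, real, labels_synth, labels_real, div_weight=0.1):
    """Stage 2 (sketch): class-wise alignment plus a diversity term.

    Pulls each synthetic class centroid toward the corresponding real class
    centroid, and discourages synthetic features from becoming near-duplicates.
    """
    align = 0.0
    for c in labels_real.unique():
        mu_real = real[labels_real == c].mean(dim=0)
        mu_syn = synth[labels_synth == c].mean(dim=0)
        align = align + (mu_real - mu_syn).pow(2).sum()
    # Diversity: penalize average pairwise cosine similarity among synthetic features.
    normed = torch.nn.functional.normalize(synth, dim=1)
    sim = (normed @ normed.t()).fill_diagonal_(0)
    diversity = sim.abs().mean()
    return align + div_weight * diversity

Optimizing learnable synthetic features against a loss of this form with a standard gradient-based optimizer is the kind of lightweight, retraining-free adaptation that the abstract contrasts with trajectory matching.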

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-li25aa,
  title     = {Adapting Precomputed Features for Efficient Graph Condensation},
  author    = {Li, Yuan and Hu, Jun and Liu, Zemin and Hooi, Bryan and Chen, Jia and He, Bingsheng},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {34604--34617},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/li25aa/li25aa.pdf},
  url       = {https://proceedings.mlr.press/v267/li25aa.html},
  abstract  = {Graph Neural Networks (GNNs) face significant computational challenges when handling large-scale graphs. To address this, Graph Condensation (GC) methods aim to compress large graphs into smaller, synthetic ones that are more manageable for GNN training. Recently, trajectory matching methods have shown state-of-the-art (SOTA) performance for GC, aligning the model’s training behavior on a condensed graph with that on the original graph by guiding the trajectory of model parameters. However, these approaches require repetitive GNN retraining during condensation, making them computationally expensive. To address the efficiency issue, we completely bypass trajectory matching and propose a novel two-stage framework. The first stage, a precomputation stage, performs one-time message passing to extract structural and semantic information from the original graph. The second stage, a diversity-aware adaptation stage, performs class-wise alignment while maximizing the diversity of synthetic features. Remarkably, even with just the precomputation stage, which takes only seconds, our method either matches or surpasses 5 out of 9 baseline results. Extensive experiments show that our approach achieves comparable or better performance while being 96$\times$ to 2,455$\times$ faster than SOTA methods, making it more practical for large-scale GNN applications. Our code and data are available at https://github.com/Xtra-Computing/GCPA.}
}
Endnote
%0 Conference Paper
%T Adapting Precomputed Features for Efficient Graph Condensation
%A Yuan Li
%A Jun Hu
%A Zemin Liu
%A Bryan Hooi
%A Jia Chen
%A Bingsheng He
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-li25aa
%I PMLR
%P 34604--34617
%U https://proceedings.mlr.press/v267/li25aa.html
%V 267
%X Graph Neural Networks (GNNs) face significant computational challenges when handling large-scale graphs. To address this, Graph Condensation (GC) methods aim to compress large graphs into smaller, synthetic ones that are more manageable for GNN training. Recently, trajectory matching methods have shown state-of-the-art (SOTA) performance for GC, aligning the model’s training behavior on a condensed graph with that on the original graph by guiding the trajectory of model parameters. However, these approaches require repetitive GNN retraining during condensation, making them computationally expensive. To address the efficiency issue, we completely bypass trajectory matching and propose a novel two-stage framework. The first stage, a precomputation stage, performs one-time message passing to extract structural and semantic information from the original graph. The second stage, a diversity-aware adaptation stage, performs class-wise alignment while maximizing the diversity of synthetic features. Remarkably, even with just the precomputation stage, which takes only seconds, our method either matches or surpasses 5 out of 9 baseline results. Extensive experiments show that our approach achieves comparable or better performance while being 96$\times$ to 2,455$\times$ faster than SOTA methods, making it more practical for large-scale GNN applications. Our code and data are available at https://github.com/Xtra-Computing/GCPA.
APA
Li, Y., Hu, J., Liu, Z., Hooi, B., Chen, J. & He, B. (2025). Adapting Precomputed Features for Efficient Graph Condensation. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:34604-34617. Available from https://proceedings.mlr.press/v267/li25aa.html.
