Pruning for GNNs: Lower Complexity with Comparable Expressiveness

Dun Ma, Jianguo Chen, Wenguo Yang, Suixiang Gao, Shengminjie Chen
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:41854-41889, 2025.

Abstract

In recent years, the pursuit of higher expressive power in graph neural networks (GNNs) has often led to more complex aggregation mechanisms and deeper architectures. To address this, we identify redundant structures in GNNs and, by pruning them, propose Pruned MP-GNNs, Pruned K-Path GNNs, and Pruned K-Hop GNNs based on their original architectures. We show that 1) although some structures are pruned in Pruned MP-GNNs and Pruned K-Path GNNs, their expressive power is not compromised; 2) K-Hop GNNs and their pruned architecture exhibit equivalent expressiveness on regular and strongly regular graphs; 3) the complexity of Pruned K-Path GNNs and Pruned K-Hop GNNs is lower than that of MP-GNNs, yet their expressive power is higher. Experimental results validate our refinements, demonstrating competitive performance across benchmark datasets with improved efficiency.
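For context on the message-passing baseline against which the complexity and expressiveness claims are stated, the following is a minimal sketch of one generic sum-aggregation MP-GNN layer. The class name SimpleMPLayer and all implementation details are illustrative assumptions for exposition only; this is not the paper's pruned architecture.

import torch
import torch.nn as nn

class SimpleMPLayer(nn.Module):
    """One generic message-passing round: h_v' = MLP([h_v, sum_{u in N(v)} h_u]).
    Illustrative sketch only; not the Pruned MP-GNN / K-Path / K-Hop design from the paper."""
    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: [num_nodes, dim] node features; edge_index: [2, num_edges] with rows (src, dst)
        src, dst = edge_index
        agg = torch.zeros_like(h)
        agg.index_add_(0, dst, h[src])  # sum each node's incoming neighbour features
        return self.update(torch.cat([h, agg], dim=-1))

Each layer costs O(|E| * dim) for aggregation plus O(|V| * dim^2) for the update; schemes with richer aggregation (e.g., over K-hop or K-path neighbourhoods) multiply these costs, which is the overhead the paper's pruning aims to reduce.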

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-ma25e,
  title     = {Pruning for {GNN}s: Lower Complexity with Comparable Expressiveness},
  author    = {Ma, Dun and Chen, Jianguo and Yang, Wenguo and Gao, Suixiang and Chen, Shengminjie},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {41854--41889},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/ma25e/ma25e.pdf},
  url       = {https://proceedings.mlr.press/v267/ma25e.html}
}
APA
Ma, D., Chen, J., Yang, W., Gao, S. & Chen, S. (2025). Pruning for GNNs: Lower Complexity with Comparable Expressiveness. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:41854-41889. Available from https://proceedings.mlr.press/v267/ma25e.html.
