A Dynamical Systems-Inspired Pruning Strategy for Addressing Oversmoothing in Graph Attention Networks

Biswadeep Chakraborty, Harshit Kumar, Saibal Mukhopadhyay
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:7250-7275, 2025.

Abstract

Graph Neural Networks (GNNs) face a critical limitation known as oversmoothing, where increasing network depth leads to homogenized node representations, severely compromising their expressiveness. We present a novel dynamical systems perspective on this challenge, revealing oversmoothing as an emergent property of GNNs’ convergence to low-dimensional attractor states. Based on this insight, we introduce DYNAMO-GAT, which combines noise-driven covariance analysis with Anti-Hebbian learning to dynamically prune attention weights, effectively preserving distinct attractor states. We provide theoretical guarantees for DYNAMO-GAT’s effectiveness and demonstrate its superior performance on benchmark datasets, consistently outperforming existing methods while requiring fewer computational resources. This work establishes a fundamental connection between dynamical systems theory and GNN behavior, providing both theoretical insights and practical solutions for deep graph learning.
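
For readers who want a concrete picture of the pruning step the abstract describes, the following is a minimal sketch, not the authors' implementation. It assumes a PyTorch GAT-style model callable as model(x, edge_index), and every name and hyperparameter here (node_correlations, anti_hebbian_prune, n_samples, noise_std, lr, threshold) is an illustrative assumption. The idea it instantiates: perturb the inputs with noise, estimate which nodes' outputs co-vary, and apply an anti-Hebbian update that weakens, and eventually removes, attention edges joining highly correlated nodes.

    # Illustrative sketch only (not the paper's code): noise-driven covariance
    # analysis plus an anti-Hebbian rule for pruning GAT attention edges.
    import torch

    def node_correlations(model, x, edge_index, n_samples=16, noise_std=0.1):
        # Estimate pairwise node-output correlations under injected input noise.
        outs = []
        with torch.no_grad():
            for _ in range(n_samples):
                noisy = x + noise_std * torch.randn_like(x)
                outs.append(model(noisy, edge_index))      # (N, F) per sample
        H = torch.stack(outs)                              # (S, N, F)
        Hc = H - H.mean(dim=0, keepdim=True)               # center over noise samples
        flat = Hc.permute(1, 0, 2).reshape(H.shape[1], -1) # (N, S*F)
        flat = flat / (flat.norm(dim=1, keepdim=True) + 1e-8)
        return flat @ flat.T                               # (N, N) correlation estimate

    def anti_hebbian_prune(edge_weight, edge_index, corr, lr=0.5, threshold=0.05):
        # Anti-Hebbian step: weaken edges whose endpoints are highly correlated
        # (the drivers of oversmoothing), then drop edges below the threshold.
        src, dst = edge_index
        edge_weight = edge_weight - lr * corr[src, dst].clamp(min=0)
        keep = edge_weight > threshold
        return edge_weight[keep], edge_index[:, keep]

In use, one would presumably interleave training steps with occasional pruning passes of this kind; how DYNAMO-GAT actually schedules the updates and sets these quantities is specified in the paper itself.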

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-chakraborty25a,
  title     = {A Dynamical Systems-Inspired Pruning Strategy for Addressing Oversmoothing in Graph Attention Networks},
  author    = {Chakraborty, Biswadeep and Kumar, Harshit and Mukhopadhyay, Saibal},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {7250--7275},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/chakraborty25a/chakraborty25a.pdf},
  url       = {https://proceedings.mlr.press/v267/chakraborty25a.html},
  abstract  = {Graph Neural Networks (GNNs) face a critical limitation known as oversmoothing, where increasing network depth leads to homogenized node representations, severely compromising their expressiveness. We present a novel dynamical systems perspective on this challenge, revealing oversmoothing as an emergent property of GNNs’ convergence to low-dimensional attractor states. Based on this insight, we introduce DYNAMO-GAT, which combines noise-driven covariance analysis with Anti-Hebbian learning to dynamically prune attention weights, effectively preserving distinct attractor states. We provide theoretical guarantees for DYNAMO-GAT’s effectiveness and demonstrate its superior performance on benchmark datasets, consistently outperforming existing methods while requiring fewer computational resources. This work establishes a fundamental connection between dynamical systems theory and GNN behavior, providing both theoretical insights and practical solutions for deep graph learning.}
}
Endnote
%0 Conference Paper
%T A Dynamical Systems-Inspired Pruning Strategy for Addressing Oversmoothing in Graph Attention Networks
%A Biswadeep Chakraborty
%A Harshit Kumar
%A Saibal Mukhopadhyay
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-chakraborty25a
%I PMLR
%P 7250--7275
%U https://proceedings.mlr.press/v267/chakraborty25a.html
%V 267
%X Graph Neural Networks (GNNs) face a critical limitation known as oversmoothing, where increasing network depth leads to homogenized node representations, severely compromising their expressiveness. We present a novel dynamical systems perspective on this challenge, revealing oversmoothing as an emergent property of GNNs’ convergence to low-dimensional attractor states. Based on this insight, we introduce DYNAMO-GAT, which combines noise-driven covariance analysis with Anti-Hebbian learning to dynamically prune attention weights, effectively preserving distinct attractor states. We provide theoretical guarantees for DYNAMO-GAT’s effectiveness and demonstrate its superior performance on benchmark datasets, consistently outperforming existing methods while requiring fewer computational resources. This work establishes a fundamental connection between dynamical systems theory and GNN behavior, providing both theoretical insights and practical solutions for deep graph learning.
APA
Chakraborty, B., Kumar, H. & Mukhopadhyay, S. (2025). A Dynamical Systems-Inspired Pruning Strategy for Addressing Oversmoothing in Graph Attention Networks. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:7250-7275. Available from https://proceedings.mlr.press/v267/chakraborty25a.html.
