Graph Neural Networks (with Proper Weights) Can Escape Oversmoothing

Zhijian Zhuo, Yifei Wang, Jinwen Ma, Yisen Wang
Proceedings of the 16th Asian Conference on Machine Learning, PMLR 260:17-32, 2025.

Abstract

Graph Neural Networks (GNNs) are known to suffer from degraded performance as more layers are added. Most prior works attribute this to graph propagation, arguing that it inevitably renders node features indistinguishable at greater depth, a phenomenon known as *oversmoothing*. However, we notice that these analyses largely ignore the role of GNN weights, either by omitting them entirely or by placing unrealistically strong assumptions on them. In this paper, we systematically revisit the role of GNN weights in oversmoothing. Notably, contrary to previous findings, we show that when the weights are learned freely, there always exist ideal weights under which vanilla GNNs completely avoid oversmoothing, even after infinitely many propagation steps. This indicates that oversmoothing is a failure of learning rather than an inherent limitation of GNNs themselves. To facilitate the learning of proper weights, we propose Weight Reparameterization (**WeightRep**), a scheme that adaptively maintains the ideal weights in vanilla GNNs throughout training. We theoretically show that for linear GNNs, WeightRep can always mitigate oversmoothing (full collapse) as well as dimensional collapse. Extensive experiments on nine benchmark datasets demonstrate its effectiveness and efficiency in practice.
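
Since the abstract argues that the weights, not propagation alone, determine whether node features collapse, the toy sketch below may help make that claim concrete. It simulates a deep linear GNN, H_{l+1} = Â H_l W_l, on a random graph and compares trivial identity weights against one illustrative choice of "rebalancing" weights (a QR-based re-orthogonalization, i.e. orthogonal iteration). Everything here, the random graph, the collapse metric, and especially the QR-based weights, is an assumption made for illustration; it is not the paper's WeightRep construction.

```python
# Toy illustration (not the paper's WeightRep): how the choice of weights in a
# deep *linear* GNN, H_{l+1} = A_hat @ H_l @ W_l, controls whether node
# features collapse. The QR-based "rebalancing" weights are an assumed
# stand-in for "proper" weights, used only to make the point concrete.
import numpy as np

rng = np.random.default_rng(0)
n, d, depth = 100, 4, 64

# Random undirected graph; add self-loops and symmetrically normalize.
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

def dirichlet_energy(H):
    """Degree-normalized Dirichlet energy (scale-invariant); -> 0 under full collapse."""
    Hn = H / np.linalg.norm(H)
    S = Hn / np.sqrt(deg)[:, None]
    diff = S[:, None, :] - S[None, :, :]
    return float((A * (diff ** 2).sum(axis=-1)).sum())

X = rng.standard_normal((n, d))

# (a) Trivial weights W_l = I: repeated propagation drives all features toward
#     the dominant eigenvector of A_hat, i.e. oversmoothing / full collapse.
H = X.copy()
for _ in range(depth):
    H = A_hat @ H
print("identity weights   :", dirichlet_energy(H))

# (b) One illustrative choice of "proper" weights: W_l = R_l^{-1} from a QR
#     factorization of A_hat @ H_l (orthogonal iteration). Feature columns stay
#     orthonormal, so neither full nor dimensional collapse occurs.
H = X.copy()
for _ in range(depth):
    Q, _ = np.linalg.qr(A_hat @ H)
    H = Q
print("rebalancing weights:", dirichlet_energy(H))
```

Under these assumptions, the identity-weight run should see its Dirichlet energy decay toward zero with depth, while the rebalancing weights keep it bounded away from zero and preserve feature rank, mirroring the abstract's claim that suitable weights can avert both full and dimensional collapse.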

Cite this Paper


BibTeX
@InProceedings{pmlr-v260-zhuo25a,
  title     = {Graph Neural Networks (with Proper Weights) Can Escape Oversmoothing},
  author    = {Zhuo, Zhijian and Wang, Yifei and Ma, Jinwen and Wang, Yisen},
  booktitle = {Proceedings of the 16th Asian Conference on Machine Learning},
  pages     = {17--32},
  year      = {2025},
  editor    = {Nguyen, Vu and Lin, Hsuan-Tien},
  volume    = {260},
  series    = {Proceedings of Machine Learning Research},
  month     = {05--08 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v260/main/assets/zhuo25a/zhuo25a.pdf},
  url       = {https://proceedings.mlr.press/v260/zhuo25a.html},
  abstract  = {Graph Neural Networks (GNNs) are known to suffer from degraded performance with more layers. Most prior works explained it from graph propagation, arguing that it inevitably leads to indistinguishable node features under more depth, known as *oversmoothing*. However, we notice that these analyses largely ignore the role of GNN weights either directly or by unrealistically strong assumptions. In this paper, we rediscover the role of GNN weights on oversmoothing with a systematic study. Notably, contrary to previous findings, we show that when learned freely, there always exist ideal weights such that vanilla GNNs completely avoid oversmoothing, even after infinite propagation steps. It indicates that oversmoothing is a problem of learning disabilities instead of the doom of GNNs themselves. To facilitate the learning of proper weights, we propose Weight Reparameterization (**WeightRep**) as a way to adaptively maintain the ideal weights in vanilla GNNs along the learning process. We theoretically show that for linear GNNs, WeightRep can always mitigate oversmoothing (full collapse) as well as dimensional collapse. Extensive experiments on nine benchmark datasets demonstrate its effectiveness and efficiency in practice.}
}
Endnote
%0 Conference Paper
%T Graph Neural Networks (with Proper Weights) Can Escape Oversmoothing
%A Zhijian Zhuo
%A Yifei Wang
%A Jinwen Ma
%A Yisen Wang
%B Proceedings of the 16th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Vu Nguyen
%E Hsuan-Tien Lin
%F pmlr-v260-zhuo25a
%I PMLR
%P 17--32
%U https://proceedings.mlr.press/v260/zhuo25a.html
%V 260
%X Graph Neural Networks (GNNs) are known to suffer from degraded performance with more layers. Most prior works explained it from graph propagation, arguing that it inevitably leads to indistinguishable node features under more depth, known as *oversmoothing*. However, we notice that these analyses largely ignore the role of GNN weights either directly or by unrealistically strong assumptions. In this paper, we rediscover the role of GNN weights on oversmoothing with a systematic study. Notably, contrary to previous findings, we show that when learned freely, there always exist ideal weights such that vanilla GNNs completely avoid oversmoothing, even after infinite propagation steps. It indicates that oversmoothing is a problem of learning disabilities instead of the doom of GNNs themselves. To facilitate the learning of proper weights, we propose Weight Reparameterization (**WeightRep**) as a way to adaptively maintain the ideal weights in vanilla GNNs along the learning process. We theoretically show that for linear GNNs, WeightRep can always mitigate oversmoothing (full collapse) as well as dimensional collapse. Extensive experiments on nine benchmark datasets demonstrate its effectiveness and efficiency in practice.
APA
Zhuo, Z., Wang, Y., Ma, J. & Wang, Y. (2025). Graph Neural Networks (with Proper Weights) Can Escape Oversmoothing. Proceedings of the 16th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 260:17-32. Available from https://proceedings.mlr.press/v260/zhuo25a.html.