Mitigating Oversmoothing Through Reverse Process of GNNs for Heterophilic Graphs

Moonjeong Park, Jaeseung Heo, Dongwoo Kim
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:39667-39681, 2024.

Abstract

Graph neural networks (GNNs) resemble a diffusion process, leading to over-smoothing of the learned representations when many layers are stacked. Hence, the reverse process of message passing can produce distinguishable node representations by inverting the forward message propagation. The distinguishable representations can help us better classify neighboring nodes with different labels, as in heterophilic graphs. In this work, we apply the design principle of the reverse process to three variants of GNNs. Through experiments on heterophilic graph data, where adjacent nodes need different representations for successful classification, we show that the reverse process significantly improves prediction performance in many cases. Additional analysis reveals that the reverse mechanism can mitigate over-smoothing across hundreds of layers. Our code is available at https://github.com/ml-postech/reverse-gnn.
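To make the forward/reverse intuition concrete, here is a minimal, self-contained sketch in plain Python. This is an illustrative assumption, not the paper's exact formulation: it models one message-passing layer as a linear diffusion step X ← (1−α)X + αÂX (with Â the symmetrically normalized adjacency) and inverts that step with a simple fixed-point iteration, which un-smooths the node features. All function names (`forward_step`, `reverse_step`, `dirichlet_energy`) are hypothetical.

```python
def matmul(A, B):
    """Multiply two list-of-lists matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def normalized_adjacency(A):
    """Symmetric normalization D^{-1/2} A D^{-1/2} of a binary adjacency."""
    n = len(A)
    deg = [sum(row) for row in A]
    return [[A[i][j] / (deg[i] ** 0.5 * deg[j] ** 0.5) for j in range(n)]
            for i in range(n)]

def forward_step(X, A_hat, alpha=0.3):
    """One smoothing (diffusion) step: X <- (1 - alpha) X + alpha A_hat X."""
    AX = matmul(A_hat, X)
    return [[(1 - alpha) * x + alpha * ax for x, ax in zip(xr, axr)]
            for xr, axr in zip(X, AX)]

def reverse_step(Y, A_hat, alpha=0.3, iters=200):
    """Invert the forward step, i.e. solve (1-alpha) X + alpha A_hat X = Y,
    with the Jacobi-style iteration X <- (Y - alpha A_hat X) / (1 - alpha).
    The update is a contraction with ratio alpha/(1-alpha), so it converges
    whenever alpha < 0.5."""
    X = [row[:] for row in Y]
    for _ in range(iters):
        AX = matmul(A_hat, X)
        X = [[(y - alpha * ax) / (1 - alpha) for y, ax in zip(yr, axr)]
             for yr, axr in zip(Y, AX)]
    return X

def dirichlet_energy(X, A):
    """Sum of squared feature differences across edges; low means smooth."""
    n = len(A)
    return sum(A[i][j] * sum((xi - xj) ** 2 for xi, xj in zip(X[i], X[j]))
               for i in range(n) for j in range(i + 1, n))
```

On a 4-node cycle, for example, the forward step lowers the Dirichlet energy (neighboring features become more similar), while the reverse step recovers the original, more distinguishable features, mirroring the idea that inverting forward propagation counteracts over-smoothing.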

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-park24d,
  title     = {Mitigating Oversmoothing Through Reverse Process of {GNN}s for Heterophilic Graphs},
  author    = {Park, Moonjeong and Heo, Jaeseung and Kim, Dongwoo},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {39667--39681},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/park24d/park24d.pdf},
  url       = {https://proceedings.mlr.press/v235/park24d.html},
  abstract  = {Graph Neural Network (GNN) resembles the diffusion process, leading to the over-smoothing of learned representations when stacking many layers. Hence, the reverse process of message passing can produce the distinguishable node representations by inverting the forward message propagation. The distinguishable representations can help us to better classify neighboring nodes with different labels, such as in heterophilic graphs. In this work, we apply the design principle of the reverse process to the three variants of the GNNs. Through the experiments on heterophilic graph data, where adjacent nodes need to have different representations for successful classification, we show that the reverse process significantly improves the prediction performance in many cases. Additional analysis reveals that the reverse mechanism can mitigate the over-smoothing over hundreds of layers. Our code is available at https://github.com/ml-postech/reverse-gnn.}
}
Endnote
%0 Conference Paper
%T Mitigating Oversmoothing Through Reverse Process of GNNs for Heterophilic Graphs
%A Moonjeong Park
%A Jaeseung Heo
%A Dongwoo Kim
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-park24d
%I PMLR
%P 39667--39681
%U https://proceedings.mlr.press/v235/park24d.html
%V 235
%X Graph Neural Network (GNN) resembles the diffusion process, leading to the over-smoothing of learned representations when stacking many layers. Hence, the reverse process of message passing can produce the distinguishable node representations by inverting the forward message propagation. The distinguishable representations can help us to better classify neighboring nodes with different labels, such as in heterophilic graphs. In this work, we apply the design principle of the reverse process to the three variants of the GNNs. Through the experiments on heterophilic graph data, where adjacent nodes need to have different representations for successful classification, we show that the reverse process significantly improves the prediction performance in many cases. Additional analysis reveals that the reverse mechanism can mitigate the over-smoothing over hundreds of layers. Our code is available at https://github.com/ml-postech/reverse-gnn.
APA
Park, M., Heo, J. & Kim, D. (2024). Mitigating Oversmoothing Through Reverse Process of GNNs for Heterophilic Graphs. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:39667-39681. Available from https://proceedings.mlr.press/v235/park24d.html.