Aggregation Buffer: Revisiting DropEdge with a New Parameter Block

Dooho Lee, Myeong Kong, Sagad Hamid, Cheonwoo Lee, Jaemin Yoo
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:33181-33204, 2025.

Abstract

We revisit DropEdge, a data augmentation technique for GNNs that randomly removes edges to expose diverse graph structures during training. Although DropEdge is a promising way to reduce overfitting to specific connections in the graph, we observe that its potential performance gain in supervised learning tasks is significantly limited. To understand why, we provide a theoretical analysis showing that the limited performance of DropEdge stems from a fundamental limitation shared by many GNN architectures. Based on this analysis, we propose Aggregation Buffer, a parameter block specifically designed to improve the robustness of GNNs by addressing this limitation of DropEdge. Our method is compatible with any GNN model and shows consistent performance improvements on multiple datasets. Moreover, it serves as a unifying solution to well-known problems such as degree bias and structural disparity. Code and datasets are available at https://github.com/dooho00/agg-buffer.
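
As a quick illustration of the edge-dropping step that DropEdge performs at each training iteration, a minimal PyTorch sketch is given below. This is illustrative only and not the authors' released implementation; the function name drop_edge and the direct masking of an edge_index tensor are assumptions made for exposition.

    import torch

    def drop_edge(edge_index: torch.Tensor, p: float = 0.5) -> torch.Tensor:
        # edge_index: [2, num_edges] tensor of (source, target) node indices.
        # Each edge is kept independently with probability 1 - p, yielding a
        # sparser, randomly perturbed graph for the current training step.
        num_edges = edge_index.size(1)
        keep_mask = torch.rand(num_edges) >= p
        return edge_index[:, keep_mask]

    # Example: resample a perturbed graph at every training iteration.
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 0]])
    perturbed = drop_edge(edge_index, p=0.3)

In practice, libraries such as PyTorch Geometric provide a comparable utility that also handles undirected graphs and disables dropping at evaluation time.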

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-lee25n,
  title     = {Aggregation Buffer: Revisiting {D}rop{E}dge with a New Parameter Block},
  author    = {Lee, Dooho and Kong, Myeong and Hamid, Sagad and Lee, Cheonwoo and Yoo, Jaemin},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {33181--33204},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/lee25n/lee25n.pdf},
  url       = {https://proceedings.mlr.press/v267/lee25n.html},
  abstract  = {We revisit DropEdge, a data augmentation technique for GNNs which randomly removes edges to expose diverse graph structures during training. While being a promising approach to effectively reduce overfitting on specific connections in the graph, we observe that its potential performance gain in supervised learning tasks is significantly limited. To understand why, we provide a theoretical analysis showing that the limited performance of DropEdge comes from the fundamental limitation that exists in many GNN architectures. Based on this analysis, we propose Aggregation Buffer, a parameter block specifically designed to improve the robustness of GNNs by addressing the limitation of DropEdge. Our method is compatible with any GNN model, and shows consistent performance improvements on multiple datasets. Moreover, our method effectively addresses well-known problems such as degree bias or structural disparity as a unifying solution. Code and datasets are available at https://github.com/dooho00/agg-buffer.}
}
Endnote
%0 Conference Paper
%T Aggregation Buffer: Revisiting DropEdge with a New Parameter Block
%A Dooho Lee
%A Myeong Kong
%A Sagad Hamid
%A Cheonwoo Lee
%A Jaemin Yoo
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-lee25n
%I PMLR
%P 33181--33204
%U https://proceedings.mlr.press/v267/lee25n.html
%V 267
%X We revisit DropEdge, a data augmentation technique for GNNs which randomly removes edges to expose diverse graph structures during training. While being a promising approach to effectively reduce overfitting on specific connections in the graph, we observe that its potential performance gain in supervised learning tasks is significantly limited. To understand why, we provide a theoretical analysis showing that the limited performance of DropEdge comes from the fundamental limitation that exists in many GNN architectures. Based on this analysis, we propose Aggregation Buffer, a parameter block specifically designed to improve the robustness of GNNs by addressing the limitation of DropEdge. Our method is compatible with any GNN model, and shows consistent performance improvements on multiple datasets. Moreover, our method effectively addresses well-known problems such as degree bias or structural disparity as a unifying solution. Code and datasets are available at https://github.com/dooho00/agg-buffer.
APA
Lee, D., Kong, M., Hamid, S., Lee, C. & Yoo, J. (2025). Aggregation Buffer: Revisiting DropEdge with a New Parameter Block. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:33181-33204. Available from https://proceedings.mlr.press/v267/lee25n.html.
