Taxonomy of Benchmarks in Graph Representation Learning

Renming Liu, Semih Cantürk, Frederik Wenkel, Sarah McGuire, Xinyi Wang, Anna Little, Leslie O'Bray, Michael Perlmutter, Bastian Rieck, Matthew Hirn, Guy Wolf, Ladislav Rampášek
Proceedings of the First Learning on Graphs Conference, PMLR 198:6:1-6:25, 2022.

Abstract

Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry. While extensive research has been done on developing GNN models with superior performance according to a collection of graph representation learning benchmarks, it is currently not well understood what aspects of a given model are probed by them. For example, to what extent do they test the ability of a model to leverage graph structure vs. node features? Here, we develop a principled approach to taxonomize benchmarking datasets according to a sensitivity profile that is based on how much GNN performance changes due to a collection of graph perturbations. Our data-driven analysis provides a deeper understanding of which benchmarking data characteristics are leveraged by GNNs. Consequently, our taxonomy can aid in selection and development of adequate graph benchmarks, and better informed evaluation of future GNN methods. Finally, our approach is designed to be extendable to multiple graph prediction task types and future datasets.

Cite this Paper


BibTeX
@InProceedings{pmlr-v198-liu22a,
  title     = {Taxonomy of Benchmarks in Graph Representation Learning},
  author    = {Liu, Renming and Cant{\"u}rk, Semih and Wenkel, Frederik and McGuire, Sarah and Wang, Xinyi and Little, Anna and O'Bray, Leslie and Perlmutter, Michael and Rieck, Bastian and Hirn, Matthew and Wolf, Guy and Ramp{\'a}{\v s}ek, Ladislav},
  booktitle = {Proceedings of the First Learning on Graphs Conference},
  pages     = {6:1--6:25},
  year      = {2022},
  editor    = {Rieck, Bastian and Pascanu, Razvan},
  volume    = {198},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v198/liu22a/liu22a.pdf},
  url       = {https://proceedings.mlr.press/v198/liu22a.html},
  abstract  = {Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry. While extensive research has been done on developing GNN models with superior performance according to a collection of graph representation learning benchmarks, it is currently not well understood what aspects of a given model are probed by them. For example, to what extent do they test the ability of a model to leverage graph structure vs. node features? Here, we develop a principled approach to taxonomize benchmarking datasets according to a \textit{sensitivity profile} that is based on how much GNN performance changes due to a collection of graph perturbations. Our data-driven analysis provides a deeper understanding of which benchmarking data characteristics are leveraged by GNNs. Consequently, our taxonomy can aid in selection and development of adequate graph benchmarks, and better informed evaluation of future GNN methods. Finally, our approach is designed to be extendable to multiple graph prediction task types and future datasets.}
}
Endnote
%0 Conference Paper
%T Taxonomy of Benchmarks in Graph Representation Learning
%A Renming Liu
%A Semih Cantürk
%A Frederik Wenkel
%A Sarah McGuire
%A Xinyi Wang
%A Anna Little
%A Leslie O'Bray
%A Michael Perlmutter
%A Bastian Rieck
%A Matthew Hirn
%A Guy Wolf
%A Ladislav Rampášek
%B Proceedings of the First Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Bastian Rieck
%E Razvan Pascanu
%F pmlr-v198-liu22a
%I PMLR
%P 6:1--6:25
%U https://proceedings.mlr.press/v198/liu22a.html
%V 198
%X Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry. While extensive research has been done on developing GNN models with superior performance according to a collection of graph representation learning benchmarks, it is currently not well understood what aspects of a given model are probed by them. For example, to what extent do they test the ability of a model to leverage graph structure vs. node features? Here, we develop a principled approach to taxonomize benchmarking datasets according to a sensitivity profile that is based on how much GNN performance changes due to a collection of graph perturbations. Our data-driven analysis provides a deeper understanding of which benchmarking data characteristics are leveraged by GNNs. Consequently, our taxonomy can aid in selection and development of adequate graph benchmarks, and better informed evaluation of future GNN methods. Finally, our approach is designed to be extendable to multiple graph prediction task types and future datasets.
APA
Liu, R., Cantürk, S., Wenkel, F., McGuire, S., Wang, X., Little, A., O'Bray, L., Perlmutter, M., Rieck, B., Hirn, M., Wolf, G. & Rampášek, L. (2022). Taxonomy of Benchmarks in Graph Representation Learning. Proceedings of the First Learning on Graphs Conference, in Proceedings of Machine Learning Research 198:6:1-6:25. Available from https://proceedings.mlr.press/v198/liu22a.html.