GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks

Kenza Amara, Zhitao Ying, Zitao Zhang, Zhichao Han, Yang Zhao, Yinan Shan, Ulrik Brandes, Sebastian Schemm, Ce Zhang
Proceedings of the First Learning on Graphs Conference, PMLR 198:44:1-44:23, 2022.

Abstract

As one of the most popular machine learning models today, graph neural networks (GNNs) have attracted intense interest recently, and so has their explainability. Unfortunately, today's evaluation frameworks for GNN explainability often rely on a few inadequate synthetic datasets, leading to conclusions of limited scope due to a lack of complexity in the problem instances. As GNN models are deployed in more mission-critical applications, we are in dire need of a common evaluation protocol for GNN explainability methods. In this paper, we propose, to the best of our knowledge, the first systematic evaluation framework for GNN explainability, GraphFramEx, which considers explainability with respect to three different "user needs". We propose a unique metric, the characterization score, which combines the fidelity measures and classifies explanations based on their quality of being sufficient or necessary. We restrict our scope to node classification tasks and compare the most representative techniques in the field of input-level explainability for GNNs. We find that personalized PageRank performs best on synthetic benchmarks, while gradient-based methods outperform it on tasks with complex graph structure. However, no method dominates the others on all evaluation dimensions, and there is always a trade-off. We further apply our evaluation protocol in a case study of fraud explanation on eBay transaction graphs to reflect the production environment.
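The characterization score mentioned in the abstract combines the two fidelity measures (fid+, which probes necessity, and fid-, which probes sufficiency). The sketch below is a minimal illustration, assuming the score is a weighted harmonic mean of fid+ and (1 - fid-); the function name and default equal weights are illustrative, not taken from the paper's reference implementation:

```python
def characterization_score(fid_plus: float, fid_minus: float,
                           w_plus: float = 0.5, w_minus: float = 0.5) -> float:
    """Combine the two fidelity measures into a single score.

    Sketch of a weighted harmonic mean of fid+ and (1 - fid-):
    high fid+ means the explanation is necessary (removing it changes
    the prediction); low fid- means it is sufficient (keeping only the
    explanation preserves the prediction). Assumes fid_plus > 0 and
    fid_minus < 1, so both denominator terms are well defined.
    """
    return (w_plus + w_minus) / (w_plus / fid_plus + w_minus / (1.0 - fid_minus))

# An explanation that is both necessary (fid+ = 1) and sufficient
# (fid- = 0) receives the maximal score of 1.
print(characterization_score(1.0, 0.0))  # → 1.0
```

With equal weights the score penalizes an explanation that scores well on only one of the two measures, since a harmonic mean is dominated by its smaller argument.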

Cite this Paper


BibTeX
@InProceedings{pmlr-v198-amara22a,
  title     = {GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks},
  author    = {Amara, Kenza and Ying, Zhitao and Zhang, Zitao and Han, Zhichao and Zhao, Yang and Shan, Yinan and Brandes, Ulrik and Schemm, Sebastian and Zhang, Ce},
  booktitle = {Proceedings of the First Learning on Graphs Conference},
  pages     = {44:1--44:23},
  year      = {2022},
  editor    = {Rieck, Bastian and Pascanu, Razvan},
  volume    = {198},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v198/amara22a/amara22a.pdf},
  url       = {https://proceedings.mlr.press/v198/amara22a.html},
  abstract  = {As one of the most popular machine learning models today, graph neural networks (GNNs) have attracted intense interest recently, and so does their explainability. Unfortunately, today’s evaluation frameworks for GNN explainability often rely on few inadequate synthetic datasets, leading to conclusions of limited scope due to a lack of complexity in the problem instances. As GNN models are deployed to more mission-critical applications, we are in dire need for a common evaluation protocol of explainability methods of GNNs. In this paper, we propose, to our best knowledge, the first systematic evaluation framework for GNN explainability GraphFramEx, considering explainability on three different "user needs". We propose a unique metric, the characterization score, which combines the fidelity measures and classifies explanations based on their quality of being sufficient or necessary. We scope ourselves to node classification tasks and compare the most representative techniques in the field of input-level explainability for GNNs. We found that personalized PageRank has the best performance for synthetic benchmarks, but gradient-based methods outperform for tasks with complex graph structure. However, none dominates the others on all evaluation dimensions and there is always a trade-off. We further apply our evaluation protocol in a case study for frauds explanation on eBay transaction graphs to reflect the production environment.}
}
Endnote
%0 Conference Paper
%T GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
%A Kenza Amara
%A Zhitao Ying
%A Zitao Zhang
%A Zhichao Han
%A Yang Zhao
%A Yinan Shan
%A Ulrik Brandes
%A Sebastian Schemm
%A Ce Zhang
%B Proceedings of the First Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Bastian Rieck
%E Razvan Pascanu
%F pmlr-v198-amara22a
%I PMLR
%P 44:1--44:23
%U https://proceedings.mlr.press/v198/amara22a.html
%V 198
%X As one of the most popular machine learning models today, graph neural networks (GNNs) have attracted intense interest recently, and so does their explainability. Unfortunately, today’s evaluation frameworks for GNN explainability often rely on few inadequate synthetic datasets, leading to conclusions of limited scope due to a lack of complexity in the problem instances. As GNN models are deployed to more mission-critical applications, we are in dire need for a common evaluation protocol of explainability methods of GNNs. In this paper, we propose, to our best knowledge, the first systematic evaluation framework for GNN explainability GraphFramEx, considering explainability on three different "user needs". We propose a unique metric, the characterization score, which combines the fidelity measures and classifies explanations based on their quality of being sufficient or necessary. We scope ourselves to node classification tasks and compare the most representative techniques in the field of input-level explainability for GNNs. We found that personalized PageRank has the best performance for synthetic benchmarks, but gradient-based methods outperform for tasks with complex graph structure. However, none dominates the others on all evaluation dimensions and there is always a trade-off. We further apply our evaluation protocol in a case study for frauds explanation on eBay transaction graphs to reflect the production environment.
APA
Amara, K., Ying, Z., Zhang, Z., Han, Z., Zhao, Y., Shan, Y., Brandes, U., Schemm, S. & Zhang, C. (2022). GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks. Proceedings of the First Learning on Graphs Conference, in Proceedings of Machine Learning Research 198:44:1-44:23. Available from https://proceedings.mlr.press/v198/amara22a.html.
