On Explainability of Graph Neural Networks via Subgraph Explorations

Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, Shuiwang Ji
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:12241-12252, 2021.

Abstract

We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computation, we propose efficient approximation schemes for computing Shapley values on graph data. Our work represents the first attempt to explain GNNs by identifying subgraphs explicitly and directly. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computation at a reasonable level.
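To make the description above concrete, the following is a minimal, hypothetical Python sketch of the two ingredients the abstract mentions: a Monte Carlo approximation of a subgraph's Shapley value, and a UCB-guided tree search that explores candidate subgraphs by pruning nodes. The score_fn callable, the node-set representation, and all hyperparameters are illustrative assumptions; this is a sketch under those assumptions, not the authors' implementation, and it does not reproduce the paper's exact pruning rules or approximation schemes.

import math
import random
from collections import defaultdict


def estimate_shapley(score_fn, all_nodes, coalition, num_samples=20):
    # Monte Carlo estimate of the Shapley value of `coalition`: average the
    # marginal contribution of the coalition over randomly sampled subsets of
    # the remaining nodes. `score_fn(nodes)` is assumed to return the model's
    # prediction score when only `nodes` are retained.
    others = [v for v in all_nodes if v not in coalition]
    total = 0.0
    for _ in range(num_samples):
        sampled = {v for v in others if random.random() < 0.5}
        total += score_fn(sampled | set(coalition)) - score_fn(sampled)
    return total / num_samples


def mcts_subgraph_search(score_fn, all_nodes, min_size=3, iterations=50, c=5.0):
    # A minimal MCTS-style search over subgraphs obtained by pruning nodes.
    # Each search-tree node is a frozenset of retained graph nodes; children
    # remove one node each. Rewards are Shapley estimates of the visited
    # subgraphs, and child selection uses a UCB-style criterion.
    root = frozenset(all_nodes)
    visits = defaultdict(int)    # visit counts per subgraph
    value = defaultdict(float)   # accumulated reward per subgraph
    best, best_score = root, float("-inf")

    for _ in range(iterations):
        path, current = [root], root
        # Selection / expansion: walk down by removing one node at a time.
        while len(current) > min_size:
            children = [current - {v} for v in current]
            random.shuffle(children)

            def ucb(child):
                if visits[child] == 0:
                    return float("inf")  # explore unvisited children first
                exploit = value[child] / visits[child]
                explore = c * math.sqrt(math.log(visits[current] + 1) / visits[child])
                return exploit + explore

            current = max(children, key=ucb)
            path.append(current)
        # Simulation: score the leaf subgraph with a Shapley estimate.
        reward = estimate_shapley(score_fn, all_nodes, current)
        if reward > best_score:
            best, best_score = current, reward
        # Backpropagation: update statistics along the visited path.
        for node in path:
            visits[node] += 1
            value[node] += reward
    return best, best_score


# Toy usage with a fake scoring function that rewards keeping nodes {0, 1, 2}.
if __name__ == "__main__":
    score = lambda nodes: float(len(nodes & {0, 1, 2}))
    subgraph, shapley = mcts_subgraph_search(score, set(range(8)))
    print(sorted(subgraph), shapley)

In the toy usage, the fake scoring function simply rewards retaining nodes 0-2, so the search should recover that set; in a real setting the scoring function would wrap a trained GNN's predicted probability for the target class.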

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-yuan21c,
  title     = {On Explainability of Graph Neural Networks via Subgraph Explorations},
  author    = {Yuan, Hao and Yu, Haiyang and Wang, Jie and Li, Kang and Ji, Shuiwang},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {12241--12252},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/yuan21c/yuan21c.pdf},
  url       = {https://proceedings.mlr.press/v139/yuan21c.html},
  abstract  = {We consider the problem of explaining the predictions of graph neural networks (GNNs), which otherwise are considered as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, our SubgraphX explains its predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs via identifying subgraphs explicitly and directly. Experimental results show that our SubgraphX achieves significantly improved explanations, while keeping computations at a reasonable level.}
}
Endnote
%0 Conference Paper
%T On Explainability of Graph Neural Networks via Subgraph Explorations
%A Hao Yuan
%A Haiyang Yu
%A Jie Wang
%A Kang Li
%A Shuiwang Ji
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-yuan21c
%I PMLR
%P 12241--12252
%U https://proceedings.mlr.press/v139/yuan21c.html
%V 139
%X We consider the problem of explaining the predictions of graph neural networks (GNNs), which otherwise are considered as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, our SubgraphX explains its predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs via identifying subgraphs explicitly and directly. Experimental results show that our SubgraphX achieves significantly improved explanations, while keeping computations at a reasonable level.
APA
Yuan, H., Yu, H., Wang, J., Li, K., & Ji, S. (2021). On Explainability of Graph Neural Networks via Subgraph Explorations. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:12241-12252. Available from https://proceedings.mlr.press/v139/yuan21c.html.
